2026-01-17 00:00:06.272677 | Job console starting
2026-01-17 00:00:06.301444 | Updating git repos
2026-01-17 00:00:06.447992 | Cloning repos into workspace
2026-01-17 00:00:06.673372 | Restoring repo states
2026-01-17 00:00:06.697292 | Merging changes
2026-01-17 00:00:06.697310 | Checking out repos
2026-01-17 00:00:07.144814 | Preparing playbooks
2026-01-17 00:00:07.894033 | Running Ansible setup
2026-01-17 00:00:13.747079 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-01-17 00:00:15.234659 |
2026-01-17 00:00:15.234773 | PLAY [Base pre]
2026-01-17 00:00:15.284865 |
2026-01-17 00:00:15.285008 | TASK [Setup log path fact]
2026-01-17 00:00:15.326522 | orchestrator | ok
2026-01-17 00:00:15.412542 |
2026-01-17 00:00:15.412669 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-17 00:00:15.467741 | orchestrator | ok
2026-01-17 00:00:15.507231 |
2026-01-17 00:00:15.507352 | TASK [emit-job-header : Print job information]
2026-01-17 00:00:15.587367 | # Job Information
2026-01-17 00:00:15.587529 | Ansible Version: 2.16.14
2026-01-17 00:00:15.587565 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-01-17 00:00:15.587598 | Pipeline: periodic-midnight
2026-01-17 00:00:15.587622 | Executor: 521e9411259a
2026-01-17 00:00:15.587642 | Triggered by: https://github.com/osism/testbed
2026-01-17 00:00:15.587664 | Event ID: c48977e25ca54854840ff4e1512436af
2026-01-17 00:00:15.597307 |
2026-01-17 00:00:15.597417 | LOOP [emit-job-header : Print node information]
2026-01-17 00:00:15.986878 | orchestrator | ok:
2026-01-17 00:00:15.987056 | orchestrator | # Node Information
2026-01-17 00:00:15.987086 | orchestrator | Inventory Hostname: orchestrator
2026-01-17 00:00:15.987107 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-01-17 00:00:15.987125 | orchestrator | Username: zuul-testbed06
2026-01-17 00:00:15.987142 | orchestrator | Distro: Debian 12.13
2026-01-17 00:00:15.987302 | orchestrator | Provider: static-testbed
2026-01-17 00:00:15.987342 | orchestrator | Region:
2026-01-17 00:00:15.987362 | orchestrator | Label: testbed-orchestrator
2026-01-17 00:00:15.987379 | orchestrator | Product Name: OpenStack Nova
2026-01-17 00:00:15.987396 | orchestrator | Interface IP: 81.163.193.140
2026-01-17 00:00:16.009205 |
2026-01-17 00:00:16.009314 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-01-17 00:00:17.678569 | orchestrator -> localhost | changed
2026-01-17 00:00:17.688613 |
2026-01-17 00:00:17.688717 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-01-17 00:00:21.169820 | orchestrator -> localhost | changed
2026-01-17 00:00:21.186495 |
2026-01-17 00:00:21.186594 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-01-17 00:00:22.038437 | orchestrator -> localhost | ok
2026-01-17 00:00:22.050983 |
2026-01-17 00:00:22.051091 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-01-17 00:00:22.113739 | orchestrator | ok
2026-01-17 00:00:22.144719 | orchestrator | included: /var/lib/zuul/builds/a542b7811fc84384a6deed4810765420/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-01-17 00:00:22.166337 |
2026-01-17 00:00:22.166444 | TASK [add-build-sshkey : Create Temp SSH key]
2026-01-17 00:00:25.135701 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-01-17 00:00:25.136694 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/a542b7811fc84384a6deed4810765420/work/a542b7811fc84384a6deed4810765420_id_rsa
2026-01-17 00:00:25.136763 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/a542b7811fc84384a6deed4810765420/work/a542b7811fc84384a6deed4810765420_id_rsa.pub
2026-01-17 00:00:25.136787 | orchestrator -> localhost | The key fingerprint is:
2026-01-17 00:00:25.136808 | orchestrator -> localhost | SHA256:A4wD75siwFOxtfOovbviw2L7MyEbVHn1dcFoUGl7uxc zuul-build-sshkey
2026-01-17 00:00:25.136827 | orchestrator -> localhost | The key's randomart image is:
2026-01-17 00:00:25.136852 | orchestrator -> localhost | +---[RSA 3072]----+
2026-01-17 00:00:25.136870 | orchestrator -> localhost | | ...... .oo+o. |
2026-01-17 00:00:25.136888 | orchestrator -> localhost | | =++. . .=.. |
2026-01-17 00:00:25.136952 | orchestrator -> localhost | | .o=oo .o . |
2026-01-17 00:00:25.136970 | orchestrator -> localhost | |.... .+. . . |
2026-01-17 00:00:25.136986 | orchestrator -> localhost | |oo .. .S . . |
2026-01-17 00:00:25.137007 | orchestrator -> localhost | |.o..oo . . E |
2026-01-17 00:00:25.137024 | orchestrator -> localhost | |. *.+. . .|
2026-01-17 00:00:25.137040 | orchestrator -> localhost | | = B . . . |
2026-01-17 00:00:25.137057 | orchestrator -> localhost | |..=o=+o . |
2026-01-17 00:00:25.137073 | orchestrator -> localhost | +----[SHA256]-----+
2026-01-17 00:00:25.137126 | orchestrator -> localhost | ok: Runtime: 0:00:01.759506
2026-01-17 00:00:25.143926 |
2026-01-17 00:00:25.144008 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-01-17 00:00:25.181747 | orchestrator | ok
2026-01-17 00:00:25.201306 | orchestrator | included: /var/lib/zuul/builds/a542b7811fc84384a6deed4810765420/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-01-17 00:00:25.230093 |
2026-01-17 00:00:25.231175 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-01-17 00:00:25.270872 | orchestrator | skipping: Conditional result was False
2026-01-17 00:00:25.278412 |
2026-01-17 00:00:25.278499 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-01-17 00:00:26.402198 | orchestrator | changed
2026-01-17 00:00:26.408332 |
2026-01-17 00:00:26.408417 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-01-17 00:00:26.738546 | orchestrator | ok
2026-01-17 00:00:26.743665 |
2026-01-17 00:00:26.743742 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-01-17 00:00:27.263479 | orchestrator | ok
2026-01-17 00:00:27.270980 |
2026-01-17 00:00:27.271073 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-01-17 00:00:27.780027 | orchestrator | ok
2026-01-17 00:00:27.788007 |
2026-01-17 00:00:27.788090 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-01-17 00:00:27.830983 | orchestrator | skipping: Conditional result was False
2026-01-17 00:00:27.848783 |
2026-01-17 00:00:27.848874 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-01-17 00:00:29.035551 | orchestrator -> localhost | changed
2026-01-17 00:00:29.050579 |
2026-01-17 00:00:29.050676 | TASK [add-build-sshkey : Add back temp key]
2026-01-17 00:00:29.998443 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/a542b7811fc84384a6deed4810765420/work/a542b7811fc84384a6deed4810765420_id_rsa (zuul-build-sshkey)
2026-01-17 00:00:29.998631 | orchestrator -> localhost | ok: Runtime: 0:00:00.084254
2026-01-17 00:00:30.004472 |
2026-01-17 00:00:30.004555 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-01-17 00:00:30.885566 | orchestrator | ok
2026-01-17 00:00:30.894143 |
2026-01-17 00:00:30.894241 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-01-17 00:00:30.969480 | orchestrator | skipping: Conditional result was False
2026-01-17 00:00:31.048601 |
2026-01-17 00:00:31.048703 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-01-17 00:00:31.481308 | orchestrator | ok
2026-01-17 00:00:31.496466 |
2026-01-17 00:00:31.496563 | TASK [validate-host : Define zuul_info_dir fact]
2026-01-17 00:00:31.548697 | orchestrator | ok
2026-01-17 00:00:31.555145 |
2026-01-17 00:00:31.555234 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-01-17 00:00:32.377487 | orchestrator -> localhost | ok
2026-01-17 00:00:32.384723 |
2026-01-17 00:00:32.384816 | TASK [validate-host : Collect information about the host]
2026-01-17 00:00:33.736500 | orchestrator | ok
2026-01-17 00:00:33.790637 |
2026-01-17 00:00:33.790756 | TASK [validate-host : Sanitize hostname]
2026-01-17 00:00:33.940307 | orchestrator | ok
2026-01-17 00:00:33.946013 |
2026-01-17 00:00:33.946135 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-01-17 00:00:35.185567 | orchestrator -> localhost | changed
2026-01-17 00:00:35.191925 |
2026-01-17 00:00:35.192077 | TASK [validate-host : Collect information about zuul worker]
2026-01-17 00:00:35.746379 | orchestrator | ok
2026-01-17 00:00:35.759403 |
2026-01-17 00:00:35.759512 | TASK [validate-host : Write out all zuul information for each host]
2026-01-17 00:00:37.234987 | orchestrator -> localhost | changed
2026-01-17 00:00:37.245081 |
2026-01-17 00:00:37.245172 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-01-17 00:00:37.574883 | orchestrator | ok
2026-01-17 00:00:37.581502 |
2026-01-17 00:00:37.581584 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-01-17 00:01:57.730295 | orchestrator | changed:
2026-01-17 00:01:57.731737 | orchestrator | .d..t...... src/
2026-01-17 00:01:57.731824 | orchestrator | .d..t...... src/github.com/
2026-01-17 00:01:57.731858 | orchestrator | .d..t...... src/github.com/osism/
2026-01-17 00:01:57.731887 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-01-17 00:01:57.731916 | orchestrator | RedHat.yml
2026-01-17 00:01:57.751046 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-01-17 00:01:57.751065 | orchestrator | RedHat.yml
2026-01-17 00:01:57.751119 | orchestrator | = 1.53.0"...
2026-01-17 00:02:13.849861 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-01-17 00:02:14.296519 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-01-17 00:02:14.925748 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-01-17 00:02:15.324088 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-01-17 00:02:16.310834 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-01-17 00:02:16.372227 | orchestrator | - Installing hashicorp/local v2.6.1...
2026-01-17 00:02:16.901910 | orchestrator | - Installed hashicorp/local v2.6.1 (signed, key ID 0C0AF313E5FD9F80)
2026-01-17 00:02:16.901974 | orchestrator |
2026-01-17 00:02:16.901981 | orchestrator | Providers are signed by their developers.
2026-01-17 00:02:16.901986 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-01-17 00:02:16.901991 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-01-17 00:02:16.901999 | orchestrator |
2026-01-17 00:02:16.902003 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-01-17 00:02:16.902008 | orchestrator | selections it made above. Include this file in your version control repository
2026-01-17 00:02:16.902038 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-01-17 00:02:16.902044 | orchestrator | you run "tofu init" in the future.
2026-01-17 00:02:16.902610 | orchestrator |
2026-01-17 00:02:16.902653 | orchestrator | OpenTofu has been successfully initialized!
2026-01-17 00:02:16.902686 | orchestrator |
2026-01-17 00:02:16.902693 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-01-17 00:02:16.902699 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-01-17 00:02:16.902706 | orchestrator | should now work.
2026-01-17 00:02:16.902713 | orchestrator |
2026-01-17 00:02:16.902720 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-01-17 00:02:16.902727 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-01-17 00:02:16.902746 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-01-17 00:02:17.092526 | orchestrator | Created and switched to workspace "ci"!
2026-01-17 00:02:17.092624 | orchestrator |
2026-01-17 00:02:17.092637 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-01-17 00:02:17.092647 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-01-17 00:02:17.092655 | orchestrator | for this configuration.
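[Editor's note] The init and workspace messages above correspond to a standard OpenTofu bootstrap. As a hedged sketch only (the actual wrapper the testbed job runs is not visible in this log; the workspace name "ci" and the ci.auto.tfvars file are taken from the output):

```shell
# Sketch of the sequence the log suggests; paths and invocation details are assumptions.
tofu init              # installs providers and writes .terraform.lock.hcl
tofu workspace new ci  # creates and switches to an empty, isolated state
tofu plan              # ci.auto.tfvars is loaded automatically due to its .auto.tfvars suffix
```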
2026-01-17 00:02:17.232368 | orchestrator | ci.auto.tfvars
2026-01-17 00:02:17.242100 | orchestrator | default_custom.tf
2026-01-17 00:02:18.178184 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-01-17 00:02:18.712395 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-01-17 00:02:19.599076 | orchestrator |
2026-01-17 00:02:19.599146 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-01-17 00:02:19.599176 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-01-17 00:02:19.599228 | orchestrator | + create
2026-01-17 00:02:19.599252 | orchestrator | <= read (data resources)
2026-01-17 00:02:19.599272 | orchestrator |
2026-01-17 00:02:19.599277 | orchestrator | OpenTofu will perform the following actions:
2026-01-17 00:02:19.599400 | orchestrator |
2026-01-17 00:02:19.599413 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-01-17 00:02:19.599418 | orchestrator | # (config refers to values not yet known)
2026-01-17 00:02:19.599422 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-01-17 00:02:19.599427 | orchestrator | + checksum = (known after apply)
2026-01-17 00:02:19.599431 | orchestrator | + created_at = (known after apply)
2026-01-17 00:02:19.599435 | orchestrator | + file = (known after apply)
2026-01-17 00:02:19.599439 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.599467 | orchestrator | + metadata = (known after apply)
2026-01-17 00:02:19.599474 | orchestrator | + min_disk_gb = (known after apply)
2026-01-17 00:02:19.599480 | orchestrator | + min_ram_mb = (known after apply)
2026-01-17 00:02:19.599486 | orchestrator | + most_recent = true
2026-01-17 00:02:19.599492 | orchestrator | + name = (known after apply)
2026-01-17 00:02:19.599498 | orchestrator | + protected = (known after apply)
2026-01-17 00:02:19.599505 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.599515 | orchestrator | + schema = (known after apply)
2026-01-17 00:02:19.599521 | orchestrator | + size_bytes = (known after apply)
2026-01-17 00:02:19.599528 | orchestrator | + tags = (known after apply)
2026-01-17 00:02:19.599534 | orchestrator | + updated_at = (known after apply)
2026-01-17 00:02:19.599540 | orchestrator | }
2026-01-17 00:02:19.599695 | orchestrator |
2026-01-17 00:02:19.599724 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-01-17 00:02:19.599731 | orchestrator | # (config refers to values not yet known)
2026-01-17 00:02:19.599738 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-01-17 00:02:19.599745 | orchestrator | + checksum = (known after apply)
2026-01-17 00:02:19.599751 | orchestrator | + created_at = (known after apply)
2026-01-17 00:02:19.599757 | orchestrator | + file = (known after apply)
2026-01-17 00:02:19.599764 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.599771 | orchestrator | + metadata = (known after apply)
2026-01-17 00:02:19.599778 | orchestrator | + min_disk_gb = (known after apply)
2026-01-17 00:02:19.599811 | orchestrator | + min_ram_mb = (known after apply)
2026-01-17 00:02:19.599817 | orchestrator | + most_recent = true
2026-01-17 00:02:19.599824 | orchestrator | + name = (known after apply)
2026-01-17 00:02:19.599830 | orchestrator | + protected = (known after apply)
2026-01-17 00:02:19.599837 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.599845 | orchestrator | + schema = (known after apply)
2026-01-17 00:02:19.599852 | orchestrator | + size_bytes = (known after apply)
2026-01-17 00:02:19.599859 | orchestrator | + tags = (known after apply)
2026-01-17 00:02:19.599865 | orchestrator | + updated_at = (known after apply)
2026-01-17 00:02:19.599872 | orchestrator | }
2026-01-17 00:02:19.600069 | orchestrator |
2026-01-17 00:02:19.600106 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-01-17 00:02:19.600114 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-01-17 00:02:19.600122 | orchestrator | + content = (known after apply)
2026-01-17 00:02:19.600127 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-17 00:02:19.600131 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-17 00:02:19.600135 | orchestrator | + content_md5 = (known after apply)
2026-01-17 00:02:19.600139 | orchestrator | + content_sha1 = (known after apply)
2026-01-17 00:02:19.600143 | orchestrator | + content_sha256 = (known after apply)
2026-01-17 00:02:19.600147 | orchestrator | + content_sha512 = (known after apply)
2026-01-17 00:02:19.600151 | orchestrator | + directory_permission = "0777"
2026-01-17 00:02:19.600155 | orchestrator | + file_permission = "0644"
2026-01-17 00:02:19.600159 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-01-17 00:02:19.600163 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.600166 | orchestrator | }
2026-01-17 00:02:19.600274 | orchestrator |
2026-01-17 00:02:19.600293 | orchestrator | # local_file.id_rsa_pub will be created
2026-01-17 00:02:19.600301 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-01-17 00:02:19.600307 | orchestrator | + content = (known after apply)
2026-01-17 00:02:19.600314 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-17 00:02:19.600318 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-17 00:02:19.600322 | orchestrator | + content_md5 = (known after apply)
2026-01-17 00:02:19.600326 | orchestrator | + content_sha1 = (known after apply)
2026-01-17 00:02:19.600330 | orchestrator | + content_sha256 = (known after apply)
2026-01-17 00:02:19.600334 | orchestrator | + content_sha512 = (known after apply)
2026-01-17 00:02:19.600338 | orchestrator | + directory_permission = "0777"
2026-01-17 00:02:19.600342 | orchestrator | + file_permission = "0644"
2026-01-17 00:02:19.600362 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-01-17 00:02:19.600369 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.600375 | orchestrator | }
2026-01-17 00:02:19.600474 | orchestrator |
2026-01-17 00:02:19.600495 | orchestrator | # local_file.inventory will be created
2026-01-17 00:02:19.600500 | orchestrator | + resource "local_file" "inventory" {
2026-01-17 00:02:19.600504 | orchestrator | + content = (known after apply)
2026-01-17 00:02:19.600507 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-17 00:02:19.600511 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-17 00:02:19.600515 | orchestrator | + content_md5 = (known after apply)
2026-01-17 00:02:19.600519 | orchestrator | + content_sha1 = (known after apply)
2026-01-17 00:02:19.600524 | orchestrator | + content_sha256 = (known after apply)
2026-01-17 00:02:19.600527 | orchestrator | + content_sha512 = (known after apply)
2026-01-17 00:02:19.600531 | orchestrator | + directory_permission = "0777"
2026-01-17 00:02:19.600535 | orchestrator | + file_permission = "0644"
2026-01-17 00:02:19.600539 | orchestrator | + filename = "inventory.ci"
2026-01-17 00:02:19.600542 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.600546 | orchestrator | }
2026-01-17 00:02:19.600623 | orchestrator |
2026-01-17 00:02:19.600635 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-01-17 00:02:19.600639 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-01-17 00:02:19.600643 | orchestrator | + content = (sensitive value)
2026-01-17 00:02:19.600647 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-17 00:02:19.600650 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-17 00:02:19.600654 | orchestrator | + content_md5 = (known after apply)
2026-01-17 00:02:19.600658 | orchestrator | + content_sha1 = (known after apply)
2026-01-17 00:02:19.600662 | orchestrator | + content_sha256 = (known after apply)
2026-01-17 00:02:19.600665 | orchestrator | + content_sha512 = (known after apply)
2026-01-17 00:02:19.600669 | orchestrator | + directory_permission = "0700"
2026-01-17 00:02:19.600673 | orchestrator | + file_permission = "0600"
2026-01-17 00:02:19.600677 | orchestrator | + filename = ".id_rsa.ci"
2026-01-17 00:02:19.600680 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.600684 | orchestrator | }
2026-01-17 00:02:19.600706 | orchestrator |
2026-01-17 00:02:19.600718 | orchestrator | # null_resource.node_semaphore will be created
2026-01-17 00:02:19.600726 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-01-17 00:02:19.600732 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.600738 | orchestrator | }
2026-01-17 00:02:19.600869 | orchestrator |
2026-01-17 00:02:19.604357 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-01-17 00:02:19.604405 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-01-17 00:02:19.604413 | orchestrator | + attachment = (known after apply)
2026-01-17 00:02:19.604420 | orchestrator | + availability_zone = "nova"
2026-01-17 00:02:19.604425 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.604432 | orchestrator | + image_id = (known after apply)
2026-01-17 00:02:19.604438 | orchestrator | + metadata = (known after apply)
2026-01-17 00:02:19.604445 | orchestrator | + name = "testbed-volume-manager-base"
2026-01-17 00:02:19.604450 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.604454 | orchestrator | + size = 80
2026-01-17 00:02:19.604458 | orchestrator | + volume_retype_policy = "never"
2026-01-17 00:02:19.604462 | orchestrator | + volume_type = "ssd"
2026-01-17 00:02:19.604466 | orchestrator | }
2026-01-17 00:02:19.604586 | orchestrator |
2026-01-17 00:02:19.604599 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-01-17 00:02:19.604603 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-17 00:02:19.604607 | orchestrator | + attachment = (known after apply)
2026-01-17 00:02:19.604611 | orchestrator | + availability_zone = "nova"
2026-01-17 00:02:19.604616 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.604634 | orchestrator | + image_id = (known after apply)
2026-01-17 00:02:19.604641 | orchestrator | + metadata = (known after apply)
2026-01-17 00:02:19.604647 | orchestrator | + name = "testbed-volume-0-node-base"
2026-01-17 00:02:19.604653 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.604659 | orchestrator | + size = 80
2026-01-17 00:02:19.604664 | orchestrator | + volume_retype_policy = "never"
2026-01-17 00:02:19.604671 | orchestrator | + volume_type = "ssd"
2026-01-17 00:02:19.604677 | orchestrator | }
2026-01-17 00:02:19.604774 | orchestrator |
2026-01-17 00:02:19.604806 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-01-17 00:02:19.604812 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-17 00:02:19.604815 | orchestrator | + attachment = (known after apply)
2026-01-17 00:02:19.604819 | orchestrator | + availability_zone = "nova"
2026-01-17 00:02:19.604823 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.604827 | orchestrator | + image_id = (known after apply)
2026-01-17 00:02:19.604831 | orchestrator | + metadata = (known after apply)
2026-01-17 00:02:19.604835 | orchestrator | + name = "testbed-volume-1-node-base"
2026-01-17 00:02:19.604839 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.604842 | orchestrator | + size = 80
2026-01-17 00:02:19.604846 | orchestrator | + volume_retype_policy = "never"
2026-01-17 00:02:19.604850 | orchestrator | + volume_type = "ssd"
2026-01-17 00:02:19.604854 | orchestrator | }
2026-01-17 00:02:19.604952 | orchestrator |
2026-01-17 00:02:19.604971 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-01-17 00:02:19.604977 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-17 00:02:19.604983 | orchestrator | + attachment = (known after apply)
2026-01-17 00:02:19.604987 | orchestrator | + availability_zone = "nova"
2026-01-17 00:02:19.604991 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.604995 | orchestrator | + image_id = (known after apply)
2026-01-17 00:02:19.604998 | orchestrator | + metadata = (known after apply)
2026-01-17 00:02:19.605002 | orchestrator | + name = "testbed-volume-2-node-base"
2026-01-17 00:02:19.605006 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.605010 | orchestrator | + size = 80
2026-01-17 00:02:19.605013 | orchestrator | + volume_retype_policy = "never"
2026-01-17 00:02:19.605017 | orchestrator | + volume_type = "ssd"
2026-01-17 00:02:19.605021 | orchestrator | }
2026-01-17 00:02:19.605089 | orchestrator |
2026-01-17 00:02:19.605100 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-01-17 00:02:19.605105 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-17 00:02:19.605108 | orchestrator | + attachment = (known after apply)
2026-01-17 00:02:19.605112 | orchestrator | + availability_zone = "nova"
2026-01-17 00:02:19.605116 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.605120 | orchestrator | + image_id = (known after apply)
2026-01-17 00:02:19.605124 | orchestrator | + metadata = (known after apply)
2026-01-17 00:02:19.605134 | orchestrator | + name = "testbed-volume-3-node-base"
2026-01-17 00:02:19.605138 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.605142 | orchestrator | + size = 80
2026-01-17 00:02:19.605146 | orchestrator | + volume_retype_policy = "never"
2026-01-17 00:02:19.605149 | orchestrator | + volume_type = "ssd"
2026-01-17 00:02:19.605153 | orchestrator | }
2026-01-17 00:02:19.605217 | orchestrator |
2026-01-17 00:02:19.605228 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-01-17 00:02:19.605233 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-17 00:02:19.605237 | orchestrator | + attachment = (known after apply)
2026-01-17 00:02:19.605241 | orchestrator | + availability_zone = "nova"
2026-01-17 00:02:19.605244 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.605255 | orchestrator | + image_id = (known after apply)
2026-01-17 00:02:19.605259 | orchestrator | + metadata = (known after apply)
2026-01-17 00:02:19.605262 | orchestrator | + name = "testbed-volume-4-node-base"
2026-01-17 00:02:19.605266 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.605270 | orchestrator | + size = 80
2026-01-17 00:02:19.605274 | orchestrator | + volume_retype_policy = "never"
2026-01-17 00:02:19.605278 | orchestrator | + volume_type = "ssd"
2026-01-17 00:02:19.605284 | orchestrator | }
2026-01-17 00:02:19.605710 | orchestrator |
2026-01-17 00:02:19.605742 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-01-17 00:02:19.605747 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-17 00:02:19.605752 | orchestrator | + attachment = (known after apply)
2026-01-17 00:02:19.605756 | orchestrator | + availability_zone = "nova"
2026-01-17 00:02:19.605759 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.605763 | orchestrator | + image_id = (known after apply)
2026-01-17 00:02:19.605767 | orchestrator | + metadata = (known after apply)
2026-01-17 00:02:19.605772 | orchestrator | + name = "testbed-volume-5-node-base"
2026-01-17 00:02:19.605776 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.605818 | orchestrator | + size = 80
2026-01-17 00:02:19.605823 | orchestrator | + volume_retype_policy = "never"
2026-01-17 00:02:19.605827 | orchestrator | + volume_type = "ssd"
2026-01-17 00:02:19.605831 | orchestrator | }
2026-01-17 00:02:19.605943 | orchestrator |
2026-01-17 00:02:19.605959 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-01-17 00:02:19.605969 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-17 00:02:19.605976 | orchestrator | + attachment = (known after apply)
2026-01-17 00:02:19.605982 | orchestrator | + availability_zone = "nova"
2026-01-17 00:02:19.605988 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.605995 | orchestrator | + metadata = (known after apply)
2026-01-17 00:02:19.606001 | orchestrator | + name = "testbed-volume-0-node-3"
2026-01-17 00:02:19.606008 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.606036 | orchestrator | + size = 20
2026-01-17 00:02:19.606041 | orchestrator | + volume_retype_policy = "never"
2026-01-17 00:02:19.606045 | orchestrator | + volume_type = "ssd"
2026-01-17 00:02:19.606049 | orchestrator | }
2026-01-17 00:02:19.606173 | orchestrator |
2026-01-17 00:02:19.606187 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-01-17 00:02:19.606192 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-17 00:02:19.606196 | orchestrator | + attachment = (known after apply)
2026-01-17 00:02:19.606200 | orchestrator | + availability_zone = "nova"
2026-01-17 00:02:19.606204 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.606208 | orchestrator | + metadata = (known after apply)
2026-01-17 00:02:19.606211 | orchestrator | + name = "testbed-volume-1-node-4"
2026-01-17 00:02:19.606215 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.606219 | orchestrator | + size = 20
2026-01-17 00:02:19.606223 | orchestrator | + volume_retype_policy = "never"
2026-01-17 00:02:19.606226 | orchestrator | + volume_type = "ssd"
2026-01-17 00:02:19.606230 | orchestrator | }
2026-01-17 00:02:19.606350 | orchestrator |
2026-01-17 00:02:19.606366 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-01-17 00:02:19.606370 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-17 00:02:19.606374 | orchestrator | + attachment = (known after apply)
2026-01-17 00:02:19.606378 | orchestrator | + availability_zone = "nova"
2026-01-17 00:02:19.606381 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.606385 | orchestrator | + metadata = (known after apply)
2026-01-17 00:02:19.606389 | orchestrator | + name = "testbed-volume-2-node-5"
2026-01-17 00:02:19.606393 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.606405 | orchestrator | + size = 20
2026-01-17 00:02:19.606409 | orchestrator | + volume_retype_policy = "never"
2026-01-17 00:02:19.606412 | orchestrator | + volume_type = "ssd"
2026-01-17 00:02:19.606416 | orchestrator | }
2026-01-17 00:02:19.606494 | orchestrator |
2026-01-17 00:02:19.606505 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-01-17 00:02:19.606509 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-17 00:02:19.606513 | orchestrator | + attachment = (known after apply)
2026-01-17 00:02:19.606517 | orchestrator | + availability_zone = "nova"
2026-01-17 00:02:19.606521 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.606524 | orchestrator | + metadata = (known after apply)
2026-01-17 00:02:19.606528 | orchestrator | + name = "testbed-volume-3-node-3"
2026-01-17 00:02:19.606532 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.606536 | orchestrator | + size = 20
2026-01-17 00:02:19.606539 | orchestrator | + volume_retype_policy = "never"
2026-01-17 00:02:19.606543 | orchestrator | + volume_type = "ssd"
2026-01-17 00:02:19.606547 | orchestrator | }
2026-01-17 00:02:19.606741 | orchestrator |
2026-01-17 00:02:19.606749 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-01-17 00:02:19.606752 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-17 00:02:19.606756 | orchestrator | + attachment = (known after apply)
2026-01-17 00:02:19.606760 | orchestrator | + availability_zone = "nova"
2026-01-17 00:02:19.606764 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.606768 | orchestrator | + metadata = (known after apply)
2026-01-17 00:02:19.606771 | orchestrator | + name = "testbed-volume-4-node-4"
2026-01-17 00:02:19.606775 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.606803 | orchestrator | + size = 20
2026-01-17 00:02:19.606808 | orchestrator | + volume_retype_policy = "never"
2026-01-17 00:02:19.606812 | orchestrator | + volume_type = "ssd"
2026-01-17 00:02:19.606816 | orchestrator | }
2026-01-17 00:02:19.606911 | orchestrator |
2026-01-17 00:02:19.606915 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-01-17 00:02:19.606919 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-17 00:02:19.606926 | orchestrator | + attachment = (known after apply)
2026-01-17 00:02:19.606932 | orchestrator | + availability_zone = "nova"
2026-01-17 00:02:19.606938 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.606943 | orchestrator | + metadata = (known after apply)
2026-01-17 00:02:19.606949 | orchestrator | + name = "testbed-volume-5-node-5"
2026-01-17 00:02:19.606955 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.606961 | orchestrator | + size = 20
2026-01-17 00:02:19.606968 | orchestrator | + volume_retype_policy = "never"
2026-01-17 00:02:19.606974 | orchestrator | + volume_type = "ssd"
2026-01-17 00:02:19.606979 | orchestrator | }
2026-01-17 00:02:19.606988 | orchestrator |
2026-01-17 00:02:19.606994 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-01-17 00:02:19.607000 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-17 00:02:19.607006 | orchestrator | + attachment = (known after apply)
2026-01-17 00:02:19.607012 | orchestrator | + availability_zone = "nova"
2026-01-17 00:02:19.607019 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.607025 | orchestrator | + metadata = (known after apply)
2026-01-17 00:02:19.607032 | orchestrator | + name = "testbed-volume-6-node-3"
2026-01-17 00:02:19.607038 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.607044 | orchestrator | + size = 20
2026-01-17 00:02:19.607048 | orchestrator | + volume_retype_policy = "never"
2026-01-17 00:02:19.607052 | orchestrator | + volume_type = "ssd"
2026-01-17 00:02:19.607056 | orchestrator | }
2026-01-17 00:02:19.607062 | orchestrator |
2026-01-17 00:02:19.607066 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-01-17 00:02:19.607070 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-17 00:02:19.607080 | orchestrator | + attachment = (known after apply)
2026-01-17 00:02:19.607084 | orchestrator | + availability_zone = "nova"
2026-01-17 00:02:19.607088 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.607091 | orchestrator | + metadata = (known after apply)
2026-01-17 00:02:19.607095 | orchestrator | + name = "testbed-volume-7-node-4"
2026-01-17 00:02:19.607099 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.607103 | orchestrator | + size = 20
2026-01-17 00:02:19.607107 | orchestrator | + volume_retype_policy = "never"
2026-01-17 00:02:19.607111 | orchestrator | + volume_type = "ssd"
2026-01-17 00:02:19.607114 | orchestrator | }
2026-01-17 00:02:19.607120 | orchestrator |
2026-01-17 00:02:19.607123 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-01-17 00:02:19.607127 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-17 00:02:19.607131 | orchestrator | + attachment = (known after apply) 2026-01-17 00:02:19.607134 | orchestrator | + availability_zone = "nova" 2026-01-17 00:02:19.607138 | orchestrator | + id = (known after apply) 2026-01-17 00:02:19.607142 | orchestrator | + metadata = (known after apply) 2026-01-17 00:02:19.607145 | orchestrator | + name = "testbed-volume-8-node-5" 2026-01-17 00:02:19.607149 | orchestrator | + region = (known after apply) 2026-01-17 00:02:19.607153 | orchestrator | + size = 20 2026-01-17 00:02:19.607157 | orchestrator | + volume_retype_policy = "never" 2026-01-17 00:02:19.607160 | orchestrator | + volume_type = "ssd" 2026-01-17 00:02:19.607164 | orchestrator | } 2026-01-17 00:02:19.607394 | orchestrator | 2026-01-17 00:02:19.607399 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-01-17 00:02:19.607402 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-01-17 00:02:19.607406 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-17 00:02:19.607410 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-17 00:02:19.607413 | orchestrator | + all_metadata = (known after apply) 2026-01-17 00:02:19.607417 | orchestrator | + all_tags = (known after apply) 2026-01-17 00:02:19.607421 | orchestrator | + availability_zone = "nova" 2026-01-17 00:02:19.607425 | orchestrator | + config_drive = true 2026-01-17 00:02:19.607428 | orchestrator | + created = (known after apply) 2026-01-17 00:02:19.607432 | orchestrator | + flavor_id = (known after apply) 2026-01-17 00:02:19.607436 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-01-17 00:02:19.607439 | orchestrator | + force_delete = false 2026-01-17 00:02:19.607443 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-17 00:02:19.607447 | 
orchestrator | + id = (known after apply) 2026-01-17 00:02:19.607450 | orchestrator | + image_id = (known after apply) 2026-01-17 00:02:19.607454 | orchestrator | + image_name = (known after apply) 2026-01-17 00:02:19.607458 | orchestrator | + key_pair = "testbed" 2026-01-17 00:02:19.607462 | orchestrator | + name = "testbed-manager" 2026-01-17 00:02:19.607465 | orchestrator | + power_state = "active" 2026-01-17 00:02:19.607469 | orchestrator | + region = (known after apply) 2026-01-17 00:02:19.607473 | orchestrator | + security_groups = (known after apply) 2026-01-17 00:02:19.607476 | orchestrator | + stop_before_destroy = false 2026-01-17 00:02:19.607480 | orchestrator | + updated = (known after apply) 2026-01-17 00:02:19.607484 | orchestrator | + user_data = (sensitive value) 2026-01-17 00:02:19.607487 | orchestrator | 2026-01-17 00:02:19.607491 | orchestrator | + block_device { 2026-01-17 00:02:19.607495 | orchestrator | + boot_index = 0 2026-01-17 00:02:19.607499 | orchestrator | + delete_on_termination = false 2026-01-17 00:02:19.607507 | orchestrator | + destination_type = "volume" 2026-01-17 00:02:19.607511 | orchestrator | + multiattach = false 2026-01-17 00:02:19.607515 | orchestrator | + source_type = "volume" 2026-01-17 00:02:19.607518 | orchestrator | + uuid = (known after apply) 2026-01-17 00:02:19.607528 | orchestrator | } 2026-01-17 00:02:19.607536 | orchestrator | 2026-01-17 00:02:19.607541 | orchestrator | + network { 2026-01-17 00:02:19.607547 | orchestrator | + access_network = false 2026-01-17 00:02:19.607553 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-17 00:02:19.607559 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-17 00:02:19.607565 | orchestrator | + mac = (known after apply) 2026-01-17 00:02:19.607571 | orchestrator | + name = (known after apply) 2026-01-17 00:02:19.607577 | orchestrator | + port = (known after apply) 2026-01-17 00:02:19.607583 | orchestrator | + uuid = (known after apply) 2026-01-17 
00:02:19.607589 | orchestrator | } 2026-01-17 00:02:19.607597 | orchestrator | } 2026-01-17 00:02:19.607675 | orchestrator | 2026-01-17 00:02:19.607681 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-01-17 00:02:19.607685 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-17 00:02:19.607689 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-17 00:02:19.607693 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-17 00:02:19.607696 | orchestrator | + all_metadata = (known after apply) 2026-01-17 00:02:19.607700 | orchestrator | + all_tags = (known after apply) 2026-01-17 00:02:19.607704 | orchestrator | + availability_zone = "nova" 2026-01-17 00:02:19.607707 | orchestrator | + config_drive = true 2026-01-17 00:02:19.607711 | orchestrator | + created = (known after apply) 2026-01-17 00:02:19.607715 | orchestrator | + flavor_id = (known after apply) 2026-01-17 00:02:19.607719 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-17 00:02:19.607722 | orchestrator | + force_delete = false 2026-01-17 00:02:19.607726 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-17 00:02:19.607730 | orchestrator | + id = (known after apply) 2026-01-17 00:02:19.607733 | orchestrator | + image_id = (known after apply) 2026-01-17 00:02:19.607737 | orchestrator | + image_name = (known after apply) 2026-01-17 00:02:19.607741 | orchestrator | + key_pair = "testbed" 2026-01-17 00:02:19.607745 | orchestrator | + name = "testbed-node-0" 2026-01-17 00:02:19.607748 | orchestrator | + power_state = "active" 2026-01-17 00:02:19.607752 | orchestrator | + region = (known after apply) 2026-01-17 00:02:19.607756 | orchestrator | + security_groups = (known after apply) 2026-01-17 00:02:19.607759 | orchestrator | + stop_before_destroy = false 2026-01-17 00:02:19.607763 | orchestrator | + updated = (known after apply) 2026-01-17 00:02:19.607767 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-17 00:02:19.607770 | orchestrator | 2026-01-17 00:02:19.607774 | orchestrator | + block_device { 2026-01-17 00:02:19.607778 | orchestrator | + boot_index = 0 2026-01-17 00:02:19.607796 | orchestrator | + delete_on_termination = false 2026-01-17 00:02:19.607800 | orchestrator | + destination_type = "volume" 2026-01-17 00:02:19.607804 | orchestrator | + multiattach = false 2026-01-17 00:02:19.607808 | orchestrator | + source_type = "volume" 2026-01-17 00:02:19.607811 | orchestrator | + uuid = (known after apply) 2026-01-17 00:02:19.607815 | orchestrator | } 2026-01-17 00:02:19.607819 | orchestrator | 2026-01-17 00:02:19.607823 | orchestrator | + network { 2026-01-17 00:02:19.607827 | orchestrator | + access_network = false 2026-01-17 00:02:19.607830 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-17 00:02:19.607834 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-17 00:02:19.607838 | orchestrator | + mac = (known after apply) 2026-01-17 00:02:19.607842 | orchestrator | + name = (known after apply) 2026-01-17 00:02:19.607846 | orchestrator | + port = (known after apply) 2026-01-17 00:02:19.607849 | orchestrator | + uuid = (known after apply) 2026-01-17 00:02:19.607853 | orchestrator | } 2026-01-17 00:02:19.607857 | orchestrator | } 2026-01-17 00:02:19.607898 | orchestrator | 2026-01-17 00:02:19.607902 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-01-17 00:02:19.607906 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-17 00:02:19.607910 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-17 00:02:19.607919 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-17 00:02:19.607923 | orchestrator | + all_metadata = (known after apply) 2026-01-17 00:02:19.607926 | orchestrator | + all_tags = (known after apply) 2026-01-17 00:02:19.607930 | orchestrator | + availability_zone = "nova" 2026-01-17 00:02:19.607934 
| orchestrator | + config_drive = true 2026-01-17 00:02:19.607938 | orchestrator | + created = (known after apply) 2026-01-17 00:02:19.607941 | orchestrator | + flavor_id = (known after apply) 2026-01-17 00:02:19.607945 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-17 00:02:19.607949 | orchestrator | + force_delete = false 2026-01-17 00:02:19.607952 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-17 00:02:19.607956 | orchestrator | + id = (known after apply) 2026-01-17 00:02:19.607960 | orchestrator | + image_id = (known after apply) 2026-01-17 00:02:19.607964 | orchestrator | + image_name = (known after apply) 2026-01-17 00:02:19.607967 | orchestrator | + key_pair = "testbed" 2026-01-17 00:02:19.607971 | orchestrator | + name = "testbed-node-1" 2026-01-17 00:02:19.607975 | orchestrator | + power_state = "active" 2026-01-17 00:02:19.607978 | orchestrator | + region = (known after apply) 2026-01-17 00:02:19.607982 | orchestrator | + security_groups = (known after apply) 2026-01-17 00:02:19.607986 | orchestrator | + stop_before_destroy = false 2026-01-17 00:02:19.607990 | orchestrator | + updated = (known after apply) 2026-01-17 00:02:19.607993 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-17 00:02:19.607997 | orchestrator | 2026-01-17 00:02:19.608001 | orchestrator | + block_device { 2026-01-17 00:02:19.608004 | orchestrator | + boot_index = 0 2026-01-17 00:02:19.608008 | orchestrator | + delete_on_termination = false 2026-01-17 00:02:19.608012 | orchestrator | + destination_type = "volume" 2026-01-17 00:02:19.608016 | orchestrator | + multiattach = false 2026-01-17 00:02:19.608019 | orchestrator | + source_type = "volume" 2026-01-17 00:02:19.608023 | orchestrator | + uuid = (known after apply) 2026-01-17 00:02:19.608027 | orchestrator | } 2026-01-17 00:02:19.608030 | orchestrator | 2026-01-17 00:02:19.608034 | orchestrator | + network { 2026-01-17 00:02:19.608038 | orchestrator | + access_network = 
false 2026-01-17 00:02:19.608042 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-17 00:02:19.608045 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-17 00:02:19.608049 | orchestrator | + mac = (known after apply) 2026-01-17 00:02:19.608053 | orchestrator | + name = (known after apply) 2026-01-17 00:02:19.608057 | orchestrator | + port = (known after apply) 2026-01-17 00:02:19.608060 | orchestrator | + uuid = (known after apply) 2026-01-17 00:02:19.608064 | orchestrator | } 2026-01-17 00:02:19.608068 | orchestrator | } 2026-01-17 00:02:19.608128 | orchestrator | 2026-01-17 00:02:19.608132 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-01-17 00:02:19.608136 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-17 00:02:19.608140 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-17 00:02:19.608144 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-17 00:02:19.608149 | orchestrator | + all_metadata = (known after apply) 2026-01-17 00:02:19.608153 | orchestrator | + all_tags = (known after apply) 2026-01-17 00:02:19.608160 | orchestrator | + availability_zone = "nova" 2026-01-17 00:02:19.608164 | orchestrator | + config_drive = true 2026-01-17 00:02:19.608168 | orchestrator | + created = (known after apply) 2026-01-17 00:02:19.608172 | orchestrator | + flavor_id = (known after apply) 2026-01-17 00:02:19.608175 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-17 00:02:19.608179 | orchestrator | + force_delete = false 2026-01-17 00:02:19.608183 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-17 00:02:19.608186 | orchestrator | + id = (known after apply) 2026-01-17 00:02:19.608190 | orchestrator | + image_id = (known after apply) 2026-01-17 00:02:19.608197 | orchestrator | + image_name = (known after apply) 2026-01-17 00:02:19.608201 | orchestrator | + key_pair = "testbed" 2026-01-17 00:02:19.608205 | orchestrator | + name = 
"testbed-node-2" 2026-01-17 00:02:19.608208 | orchestrator | + power_state = "active" 2026-01-17 00:02:19.608212 | orchestrator | + region = (known after apply) 2026-01-17 00:02:19.608216 | orchestrator | + security_groups = (known after apply) 2026-01-17 00:02:19.608219 | orchestrator | + stop_before_destroy = false 2026-01-17 00:02:19.608223 | orchestrator | + updated = (known after apply) 2026-01-17 00:02:19.608227 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-17 00:02:19.608230 | orchestrator | 2026-01-17 00:02:19.608236 | orchestrator | + block_device { 2026-01-17 00:02:19.608242 | orchestrator | + boot_index = 0 2026-01-17 00:02:19.608249 | orchestrator | + delete_on_termination = false 2026-01-17 00:02:19.608255 | orchestrator | + destination_type = "volume" 2026-01-17 00:02:19.608261 | orchestrator | + multiattach = false 2026-01-17 00:02:19.608266 | orchestrator | + source_type = "volume" 2026-01-17 00:02:19.608272 | orchestrator | + uuid = (known after apply) 2026-01-17 00:02:19.608278 | orchestrator | } 2026-01-17 00:02:19.608284 | orchestrator | 2026-01-17 00:02:19.608291 | orchestrator | + network { 2026-01-17 00:02:19.608297 | orchestrator | + access_network = false 2026-01-17 00:02:19.608303 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-17 00:02:19.608309 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-17 00:02:19.608315 | orchestrator | + mac = (known after apply) 2026-01-17 00:02:19.608321 | orchestrator | + name = (known after apply) 2026-01-17 00:02:19.608327 | orchestrator | + port = (known after apply) 2026-01-17 00:02:19.608333 | orchestrator | + uuid = (known after apply) 2026-01-17 00:02:19.608339 | orchestrator | } 2026-01-17 00:02:19.608343 | orchestrator | } 2026-01-17 00:02:19.608350 | orchestrator | 2026-01-17 00:02:19.608354 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-01-17 00:02:19.608357 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-01-17 00:02:19.608361 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-17 00:02:19.608365 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-17 00:02:19.608369 | orchestrator | + all_metadata = (known after apply) 2026-01-17 00:02:19.608372 | orchestrator | + all_tags = (known after apply) 2026-01-17 00:02:19.608376 | orchestrator | + availability_zone = "nova" 2026-01-17 00:02:19.608380 | orchestrator | + config_drive = true 2026-01-17 00:02:19.608383 | orchestrator | + created = (known after apply) 2026-01-17 00:02:19.608387 | orchestrator | + flavor_id = (known after apply) 2026-01-17 00:02:19.608391 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-17 00:02:19.608394 | orchestrator | + force_delete = false 2026-01-17 00:02:19.608398 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-17 00:02:19.608402 | orchestrator | + id = (known after apply) 2026-01-17 00:02:19.608406 | orchestrator | + image_id = (known after apply) 2026-01-17 00:02:19.608409 | orchestrator | + image_name = (known after apply) 2026-01-17 00:02:19.608413 | orchestrator | + key_pair = "testbed" 2026-01-17 00:02:19.608417 | orchestrator | + name = "testbed-node-3" 2026-01-17 00:02:19.608421 | orchestrator | + power_state = "active" 2026-01-17 00:02:19.608424 | orchestrator | + region = (known after apply) 2026-01-17 00:02:19.608428 | orchestrator | + security_groups = (known after apply) 2026-01-17 00:02:19.608432 | orchestrator | + stop_before_destroy = false 2026-01-17 00:02:19.608435 | orchestrator | + updated = (known after apply) 2026-01-17 00:02:19.608439 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-17 00:02:19.608443 | orchestrator | 2026-01-17 00:02:19.608446 | orchestrator | + block_device { 2026-01-17 00:02:19.608454 | orchestrator | + boot_index = 0 2026-01-17 00:02:19.608457 | orchestrator | + delete_on_termination = false 2026-01-17 
00:02:19.608461 | orchestrator | + destination_type = "volume" 2026-01-17 00:02:19.608469 | orchestrator | + multiattach = false 2026-01-17 00:02:19.608473 | orchestrator | + source_type = "volume" 2026-01-17 00:02:19.608476 | orchestrator | + uuid = (known after apply) 2026-01-17 00:02:19.608480 | orchestrator | } 2026-01-17 00:02:19.608484 | orchestrator | 2026-01-17 00:02:19.608488 | orchestrator | + network { 2026-01-17 00:02:19.608491 | orchestrator | + access_network = false 2026-01-17 00:02:19.608495 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-17 00:02:19.608499 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-17 00:02:19.608502 | orchestrator | + mac = (known after apply) 2026-01-17 00:02:19.608506 | orchestrator | + name = (known after apply) 2026-01-17 00:02:19.608510 | orchestrator | + port = (known after apply) 2026-01-17 00:02:19.608514 | orchestrator | + uuid = (known after apply) 2026-01-17 00:02:19.608517 | orchestrator | } 2026-01-17 00:02:19.608521 | orchestrator | } 2026-01-17 00:02:19.608580 | orchestrator | 2026-01-17 00:02:19.608585 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-01-17 00:02:19.608589 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-17 00:02:19.608593 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-17 00:02:19.608597 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-17 00:02:19.608600 | orchestrator | + all_metadata = (known after apply) 2026-01-17 00:02:19.608604 | orchestrator | + all_tags = (known after apply) 2026-01-17 00:02:19.608608 | orchestrator | + availability_zone = "nova" 2026-01-17 00:02:19.608611 | orchestrator | + config_drive = true 2026-01-17 00:02:19.608615 | orchestrator | + created = (known after apply) 2026-01-17 00:02:19.608619 | orchestrator | + flavor_id = (known after apply) 2026-01-17 00:02:19.608622 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-17 00:02:19.608626 | 
orchestrator | + force_delete = false 2026-01-17 00:02:19.608630 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-17 00:02:19.608633 | orchestrator | + id = (known after apply) 2026-01-17 00:02:19.608637 | orchestrator | + image_id = (known after apply) 2026-01-17 00:02:19.608641 | orchestrator | + image_name = (known after apply) 2026-01-17 00:02:19.608645 | orchestrator | + key_pair = "testbed" 2026-01-17 00:02:19.608648 | orchestrator | + name = "testbed-node-4" 2026-01-17 00:02:19.608652 | orchestrator | + power_state = "active" 2026-01-17 00:02:19.608656 | orchestrator | + region = (known after apply) 2026-01-17 00:02:19.608660 | orchestrator | + security_groups = (known after apply) 2026-01-17 00:02:19.608663 | orchestrator | + stop_before_destroy = false 2026-01-17 00:02:19.608667 | orchestrator | + updated = (known after apply) 2026-01-17 00:02:19.608671 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-17 00:02:19.608674 | orchestrator | 2026-01-17 00:02:19.608678 | orchestrator | + block_device { 2026-01-17 00:02:19.608682 | orchestrator | + boot_index = 0 2026-01-17 00:02:19.608686 | orchestrator | + delete_on_termination = false 2026-01-17 00:02:19.608689 | orchestrator | + destination_type = "volume" 2026-01-17 00:02:19.608693 | orchestrator | + multiattach = false 2026-01-17 00:02:19.608697 | orchestrator | + source_type = "volume" 2026-01-17 00:02:19.608700 | orchestrator | + uuid = (known after apply) 2026-01-17 00:02:19.608704 | orchestrator | } 2026-01-17 00:02:19.608708 | orchestrator | 2026-01-17 00:02:19.608712 | orchestrator | + network { 2026-01-17 00:02:19.608715 | orchestrator | + access_network = false 2026-01-17 00:02:19.608719 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-17 00:02:19.608723 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-17 00:02:19.608727 | orchestrator | + mac = (known after apply) 2026-01-17 00:02:19.608730 | orchestrator | + name = (known 
after apply) 2026-01-17 00:02:19.608734 | orchestrator | + port = (known after apply) 2026-01-17 00:02:19.608738 | orchestrator | + uuid = (known after apply) 2026-01-17 00:02:19.608741 | orchestrator | } 2026-01-17 00:02:19.608745 | orchestrator | } 2026-01-17 00:02:19.608830 | orchestrator | 2026-01-17 00:02:19.608835 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-01-17 00:02:19.608839 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-17 00:02:19.608843 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-17 00:02:19.608846 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-17 00:02:19.608850 | orchestrator | + all_metadata = (known after apply) 2026-01-17 00:02:19.608854 | orchestrator | + all_tags = (known after apply) 2026-01-17 00:02:19.608857 | orchestrator | + availability_zone = "nova" 2026-01-17 00:02:19.608861 | orchestrator | + config_drive = true 2026-01-17 00:02:19.608865 | orchestrator | + created = (known after apply) 2026-01-17 00:02:19.608869 | orchestrator | + flavor_id = (known after apply) 2026-01-17 00:02:19.608872 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-17 00:02:19.608876 | orchestrator | + force_delete = false 2026-01-17 00:02:19.608883 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-17 00:02:19.608887 | orchestrator | + id = (known after apply) 2026-01-17 00:02:19.608890 | orchestrator | + image_id = (known after apply) 2026-01-17 00:02:19.608894 | orchestrator | + image_name = (known after apply) 2026-01-17 00:02:19.608898 | orchestrator | + key_pair = "testbed" 2026-01-17 00:02:19.608901 | orchestrator | + name = "testbed-node-5" 2026-01-17 00:02:19.608905 | orchestrator | + power_state = "active" 2026-01-17 00:02:19.608909 | orchestrator | + region = (known after apply) 2026-01-17 00:02:19.608912 | orchestrator | + security_groups = (known after apply) 2026-01-17 00:02:19.608916 | orchestrator | + 
stop_before_destroy = false 2026-01-17 00:02:19.608920 | orchestrator | + updated = (known after apply) 2026-01-17 00:02:19.608923 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-17 00:02:19.608927 | orchestrator | 2026-01-17 00:02:19.608931 | orchestrator | + block_device { 2026-01-17 00:02:19.608935 | orchestrator | + boot_index = 0 2026-01-17 00:02:19.608938 | orchestrator | + delete_on_termination = false 2026-01-17 00:02:19.608942 | orchestrator | + destination_type = "volume" 2026-01-17 00:02:19.608946 | orchestrator | + multiattach = false 2026-01-17 00:02:19.608949 | orchestrator | + source_type = "volume" 2026-01-17 00:02:19.608953 | orchestrator | + uuid = (known after apply) 2026-01-17 00:02:19.608957 | orchestrator | } 2026-01-17 00:02:19.608960 | orchestrator | 2026-01-17 00:02:19.608964 | orchestrator | + network { 2026-01-17 00:02:19.608968 | orchestrator | + access_network = false 2026-01-17 00:02:19.608972 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-17 00:02:19.608975 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-17 00:02:19.608979 | orchestrator | + mac = (known after apply) 2026-01-17 00:02:19.608983 | orchestrator | + name = (known after apply) 2026-01-17 00:02:19.608986 | orchestrator | + port = (known after apply) 2026-01-17 00:02:19.608990 | orchestrator | + uuid = (known after apply) 2026-01-17 00:02:19.608994 | orchestrator | } 2026-01-17 00:02:19.608997 | orchestrator | } 2026-01-17 00:02:19.609003 | orchestrator | 2026-01-17 00:02:19.609007 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-01-17 00:02:19.609016 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-01-17 00:02:19.609020 | orchestrator | + fingerprint = (known after apply) 2026-01-17 00:02:19.609024 | orchestrator | + id = (known after apply) 2026-01-17 00:02:19.609027 | orchestrator | + name = "testbed" 2026-01-17 00:02:19.609031 | orchestrator | + private_key = 
(sensitive value) 2026-01-17 00:02:19.609035 | orchestrator | + public_key = (known after apply) 2026-01-17 00:02:19.609039 | orchestrator | + region = (known after apply) 2026-01-17 00:02:19.609042 | orchestrator | + user_id = (known after apply) 2026-01-17 00:02:19.609046 | orchestrator | } 2026-01-17 00:02:19.609050 | orchestrator | 2026-01-17 00:02:19.609054 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-01-17 00:02:19.609057 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-17 00:02:19.609064 | orchestrator | + device = (known after apply) 2026-01-17 00:02:19.609068 | orchestrator | + id = (known after apply) 2026-01-17 00:02:19.609072 | orchestrator | + instance_id = (known after apply) 2026-01-17 00:02:19.609075 | orchestrator | + region = (known after apply) 2026-01-17 00:02:19.609079 | orchestrator | + volume_id = (known after apply) 2026-01-17 00:02:19.609083 | orchestrator | } 2026-01-17 00:02:19.609086 | orchestrator | 2026-01-17 00:02:19.609090 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-01-17 00:02:19.609094 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-17 00:02:19.609098 | orchestrator | + device = (known after apply) 2026-01-17 00:02:19.609101 | orchestrator | + id = (known after apply) 2026-01-17 00:02:19.609105 | orchestrator | + instance_id = (known after apply) 2026-01-17 00:02:19.609109 | orchestrator | + region = (known after apply) 2026-01-17 00:02:19.609112 | orchestrator | + volume_id = (known after apply) 2026-01-17 00:02:19.609116 | orchestrator | } 2026-01-17 00:02:19.609120 | orchestrator | 2026-01-17 00:02:19.609123 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-01-17 00:02:19.609127 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
2026-01-17 00:02:19.609131 | orchestrator | + device = (known after apply)
2026-01-17 00:02:19.609135 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.609138 | orchestrator | + instance_id = (known after apply)
2026-01-17 00:02:19.609142 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.609146 | orchestrator | + volume_id = (known after apply)
2026-01-17 00:02:19.609149 | orchestrator | }
2026-01-17 00:02:19.609155 | orchestrator |
2026-01-17 00:02:19.609159 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2026-01-17 00:02:19.609162 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-17 00:02:19.609166 | orchestrator | + device = (known after apply)
2026-01-17 00:02:19.609170 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.609173 | orchestrator | + instance_id = (known after apply)
2026-01-17 00:02:19.609177 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.609181 | orchestrator | + volume_id = (known after apply)
2026-01-17 00:02:19.609184 | orchestrator | }
2026-01-17 00:02:19.609188 | orchestrator |
2026-01-17 00:02:19.609192 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2026-01-17 00:02:19.609196 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-17 00:02:19.609199 | orchestrator | + device = (known after apply)
2026-01-17 00:02:19.609203 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.609207 | orchestrator | + instance_id = (known after apply)
2026-01-17 00:02:19.609213 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.609217 | orchestrator | + volume_id = (known after apply)
2026-01-17 00:02:19.609220 | orchestrator | }
2026-01-17 00:02:19.619051 | orchestrator |
2026-01-17 00:02:19.619121 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2026-01-17 00:02:19.619128 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-17 00:02:19.619133 | orchestrator | + device = (known after apply)
2026-01-17 00:02:19.619137 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.619141 | orchestrator | + instance_id = (known after apply)
2026-01-17 00:02:19.619145 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.619149 | orchestrator | + volume_id = (known after apply)
2026-01-17 00:02:19.619153 | orchestrator | }
2026-01-17 00:02:19.619157 | orchestrator |
2026-01-17 00:02:19.619178 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2026-01-17 00:02:19.619182 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-17 00:02:19.619186 | orchestrator | + device = (known after apply)
2026-01-17 00:02:19.619190 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.619194 | orchestrator | + instance_id = (known after apply)
2026-01-17 00:02:19.619197 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.619217 | orchestrator | + volume_id = (known after apply)
2026-01-17 00:02:19.619223 | orchestrator | }
2026-01-17 00:02:19.619229 | orchestrator |
2026-01-17 00:02:19.619233 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2026-01-17 00:02:19.619236 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-17 00:02:19.619257 | orchestrator | + device = (known after apply)
2026-01-17 00:02:19.619261 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.619265 | orchestrator | + instance_id = (known after apply)
2026-01-17 00:02:19.619269 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.619273 | orchestrator | + volume_id = (known after apply)
2026-01-17 00:02:19.619277 | orchestrator | }
2026-01-17 00:02:19.619281 | orchestrator |
2026-01-17 00:02:19.619284 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2026-01-17 00:02:19.619288 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-17 00:02:19.619292 | orchestrator | + device = (known after apply)
2026-01-17 00:02:19.619296 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.619300 | orchestrator | + instance_id = (known after apply)
2026-01-17 00:02:19.619303 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.619307 | orchestrator | + volume_id = (known after apply)
2026-01-17 00:02:19.619311 | orchestrator | }
2026-01-17 00:02:19.619315 | orchestrator |
2026-01-17 00:02:19.619476 | orchestrator | # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
2026-01-17 00:02:19.619483 | orchestrator | + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
2026-01-17 00:02:19.619487 | orchestrator | + fixed_ip = (known after apply)
2026-01-17 00:02:19.619490 | orchestrator | + floating_ip = (known after apply)
2026-01-17 00:02:19.619494 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.619498 | orchestrator | + port_id = (known after apply)
2026-01-17 00:02:19.619502 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.619505 | orchestrator | }
2026-01-17 00:02:19.619509 | orchestrator |
2026-01-17 00:02:19.619513 | orchestrator | # openstack_networking_floatingip_v2.manager_floating_ip will be created
2026-01-17 00:02:19.619517 | orchestrator | + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
2026-01-17 00:02:19.619537 | orchestrator | + address = (known after apply)
2026-01-17 00:02:19.619542 | orchestrator | + all_tags = (known after apply)
2026-01-17 00:02:19.619545 | orchestrator | + dns_domain = (known after apply)
2026-01-17 00:02:19.619549 | orchestrator | + dns_name = (known after apply)
2026-01-17 00:02:19.619553 | orchestrator | + fixed_ip = (known after apply)
2026-01-17 00:02:19.619557 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.619560 | orchestrator | + pool = "public"
2026-01-17 00:02:19.619565 | orchestrator | + port_id = (known after apply)
2026-01-17 00:02:19.619569 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.619573 | orchestrator | + subnet_id = (known after apply)
2026-01-17 00:02:19.619576 | orchestrator | + tenant_id = (known after apply)
2026-01-17 00:02:19.619580 | orchestrator | }
2026-01-17 00:02:19.619584 | orchestrator |
2026-01-17 00:02:19.619588 | orchestrator | # openstack_networking_network_v2.net_management will be created
2026-01-17 00:02:19.619591 | orchestrator | + resource "openstack_networking_network_v2" "net_management" {
2026-01-17 00:02:19.619595 | orchestrator | + admin_state_up = (known after apply)
2026-01-17 00:02:19.619614 | orchestrator | + all_tags = (known after apply)
2026-01-17 00:02:19.619618 | orchestrator | + availability_zone_hints = [
2026-01-17 00:02:19.619622 | orchestrator | + "nova",
2026-01-17 00:02:19.619626 | orchestrator | ]
2026-01-17 00:02:19.619630 | orchestrator | + dns_domain = (known after apply)
2026-01-17 00:02:19.619634 | orchestrator | + external = (known after apply)
2026-01-17 00:02:19.619637 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.619641 | orchestrator | + mtu = (known after apply)
2026-01-17 00:02:19.619645 | orchestrator | + name = "net-testbed-management"
2026-01-17 00:02:19.619649 | orchestrator | + port_security_enabled = (known after apply)
2026-01-17 00:02:19.619658 | orchestrator | + qos_policy_id = (known after apply)
2026-01-17 00:02:19.619663 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.619667 | orchestrator | + shared = (known after apply)
2026-01-17 00:02:19.619671 | orchestrator | + tenant_id = (known after apply)
2026-01-17 00:02:19.619675 | orchestrator | + transparent_vlan = (known after apply)
2026-01-17 00:02:19.619693 | orchestrator |
2026-01-17 00:02:19.619697 | orchestrator | + segments (known after apply)
2026-01-17 00:02:19.619701 | orchestrator | }
2026-01-17 00:02:19.619705 | orchestrator |
2026-01-17 00:02:19.619709 | orchestrator | # openstack_networking_port_v2.manager_port_management will be created
2026-01-17 00:02:19.619712 | orchestrator | + resource "openstack_networking_port_v2" "manager_port_management" {
2026-01-17 00:02:19.619716 | orchestrator | + admin_state_up = (known after apply)
2026-01-17 00:02:19.619720 | orchestrator | + all_fixed_ips = (known after apply)
2026-01-17 00:02:19.619724 | orchestrator | + all_security_group_ids = (known after apply)
2026-01-17 00:02:19.619734 | orchestrator | + all_tags = (known after apply)
2026-01-17 00:02:19.619738 | orchestrator | + device_id = (known after apply)
2026-01-17 00:02:19.619742 | orchestrator | + device_owner = (known after apply)
2026-01-17 00:02:19.619746 | orchestrator | + dns_assignment = (known after apply)
2026-01-17 00:02:19.619749 | orchestrator | + dns_name = (known after apply)
2026-01-17 00:02:19.619753 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.619798 | orchestrator | + mac_address = (known after apply)
2026-01-17 00:02:19.619803 | orchestrator | + network_id = (known after apply)
2026-01-17 00:02:19.619807 | orchestrator | + port_security_enabled = (known after apply)
2026-01-17 00:02:19.619810 | orchestrator | + qos_policy_id = (known after apply)
2026-01-17 00:02:19.619814 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.619818 | orchestrator | + security_group_ids = (known after apply)
2026-01-17 00:02:19.619822 | orchestrator | + tenant_id = (known after apply)
2026-01-17 00:02:19.619825 | orchestrator |
2026-01-17 00:02:19.619829 | orchestrator | + allowed_address_pairs {
2026-01-17 00:02:19.619848 | orchestrator | + ip_address = "192.168.16.8/32"
2026-01-17 00:02:19.619853 | orchestrator | }
2026-01-17 00:02:19.619857 | orchestrator |
2026-01-17 00:02:19.619861 | orchestrator | + binding (known after apply)
2026-01-17 00:02:19.619864 | orchestrator |
2026-01-17 00:02:19.619868 | orchestrator | + fixed_ip {
2026-01-17 00:02:19.619872 | orchestrator | + ip_address = "192.168.16.5"
2026-01-17 00:02:19.619876 | orchestrator | + subnet_id = (known after apply)
2026-01-17 00:02:19.619880 | orchestrator | }
2026-01-17 00:02:19.619884 | orchestrator | }
2026-01-17 00:02:19.619887 | orchestrator |
2026-01-17 00:02:19.619891 | orchestrator | # openstack_networking_port_v2.node_port_management[0] will be created
2026-01-17 00:02:19.619895 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-01-17 00:02:19.619899 | orchestrator | + admin_state_up = (known after apply)
2026-01-17 00:02:19.619902 | orchestrator | + all_fixed_ips = (known after apply)
2026-01-17 00:02:19.619906 | orchestrator | + all_security_group_ids = (known after apply)
2026-01-17 00:02:19.619910 | orchestrator | + all_tags = (known after apply)
2026-01-17 00:02:19.619929 | orchestrator | + device_id = (known after apply)
2026-01-17 00:02:19.619933 | orchestrator | + device_owner = (known after apply)
2026-01-17 00:02:19.619936 | orchestrator | + dns_assignment = (known after apply)
2026-01-17 00:02:19.619940 | orchestrator | + dns_name = (known after apply)
2026-01-17 00:02:19.619944 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.619948 | orchestrator | + mac_address = (known after apply)
2026-01-17 00:02:19.619951 | orchestrator | + network_id = (known after apply)
2026-01-17 00:02:19.619955 | orchestrator | + port_security_enabled = (known after apply)
2026-01-17 00:02:19.619959 | orchestrator | + qos_policy_id = (known after apply)
2026-01-17 00:02:19.619963 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.619970 | orchestrator | + security_group_ids = (known after apply)
2026-01-17 00:02:19.619974 | orchestrator | + tenant_id = (known after apply)
2026-01-17 00:02:19.619977 | orchestrator |
2026-01-17 00:02:19.619981 | orchestrator | + allowed_address_pairs {
2026-01-17 00:02:19.619985 | orchestrator | + ip_address = "192.168.16.254/32"
2026-01-17 00:02:19.620004 | orchestrator | }
2026-01-17 00:02:19.620008 | orchestrator | + allowed_address_pairs {
2026-01-17 00:02:19.620012 | orchestrator | + ip_address = "192.168.16.8/32"
2026-01-17 00:02:19.620015 | orchestrator | }
2026-01-17 00:02:19.620019 | orchestrator | + allowed_address_pairs {
2026-01-17 00:02:19.620023 | orchestrator | + ip_address = "192.168.16.9/32"
2026-01-17 00:02:19.620030 | orchestrator | }
2026-01-17 00:02:19.620034 | orchestrator |
2026-01-17 00:02:19.620037 | orchestrator | + binding (known after apply)
2026-01-17 00:02:19.620041 | orchestrator |
2026-01-17 00:02:19.620045 | orchestrator | + fixed_ip {
2026-01-17 00:02:19.620049 | orchestrator | + ip_address = "192.168.16.10"
2026-01-17 00:02:19.620052 | orchestrator | + subnet_id = (known after apply)
2026-01-17 00:02:19.620056 | orchestrator | }
2026-01-17 00:02:19.620060 | orchestrator | }
2026-01-17 00:02:19.620064 | orchestrator |
2026-01-17 00:02:19.620091 | orchestrator | # openstack_networking_port_v2.node_port_management[1] will be created
2026-01-17 00:02:19.620095 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-01-17 00:02:19.620099 | orchestrator | + admin_state_up = (known after apply)
2026-01-17 00:02:19.620103 | orchestrator | + all_fixed_ips = (known after apply)
2026-01-17 00:02:19.620106 | orchestrator | + all_security_group_ids = (known after apply)
2026-01-17 00:02:19.620110 | orchestrator | + all_tags = (known after apply)
2026-01-17 00:02:19.620114 | orchestrator | + device_id = (known after apply)
2026-01-17 00:02:19.620118 | orchestrator | + device_owner = (known after apply)
2026-01-17 00:02:19.620121 | orchestrator | + dns_assignment = (known after apply)
2026-01-17 00:02:19.620125 | orchestrator | + dns_name = (known after apply)
2026-01-17 00:02:19.620129 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.620133 | orchestrator | + mac_address = (known after apply)
2026-01-17 00:02:19.620139 | orchestrator | + network_id = (known after apply)
2026-01-17 00:02:19.620143 | orchestrator | + port_security_enabled = (known after apply)
2026-01-17 00:02:19.620204 | orchestrator | + qos_policy_id = (known after apply)
2026-01-17 00:02:19.620208 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.620212 | orchestrator | + security_group_ids = (known after apply)
2026-01-17 00:02:19.620216 | orchestrator | + tenant_id = (known after apply)
2026-01-17 00:02:19.620220 | orchestrator |
2026-01-17 00:02:19.620240 | orchestrator | + allowed_address_pairs {
2026-01-17 00:02:19.620245 | orchestrator | + ip_address = "192.168.16.254/32"
2026-01-17 00:02:19.620249 | orchestrator | }
2026-01-17 00:02:19.620253 | orchestrator | + allowed_address_pairs {
2026-01-17 00:02:19.620256 | orchestrator | + ip_address = "192.168.16.8/32"
2026-01-17 00:02:19.620260 | orchestrator | }
2026-01-17 00:02:19.620264 | orchestrator | + allowed_address_pairs {
2026-01-17 00:02:19.620268 | orchestrator | + ip_address = "192.168.16.9/32"
2026-01-17 00:02:19.620271 | orchestrator | }
2026-01-17 00:02:19.620275 | orchestrator |
2026-01-17 00:02:19.620279 | orchestrator | + binding (known after apply)
2026-01-17 00:02:19.620283 | orchestrator |
2026-01-17 00:02:19.620286 | orchestrator | + fixed_ip {
2026-01-17 00:02:19.620290 | orchestrator | + ip_address = "192.168.16.11"
2026-01-17 00:02:19.620294 | orchestrator | + subnet_id = (known after apply)
2026-01-17 00:02:19.620298 | orchestrator | }
2026-01-17 00:02:19.620315 | orchestrator | }
2026-01-17 00:02:19.620319 | orchestrator |
2026-01-17 00:02:19.620323 | orchestrator | # openstack_networking_port_v2.node_port_management[2] will be created
2026-01-17 00:02:19.620327 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-01-17 00:02:19.620331 | orchestrator | + admin_state_up = (known after apply)
2026-01-17 00:02:19.620335 | orchestrator | + all_fixed_ips = (known after apply)
2026-01-17 00:02:19.620338 | orchestrator | + all_security_group_ids = (known after apply)
2026-01-17 00:02:19.620342 | orchestrator | + all_tags = (known after apply)
2026-01-17 00:02:19.620350 | orchestrator | + device_id = (known after apply)
2026-01-17 00:02:19.620354 | orchestrator | + device_owner = (known after apply)
2026-01-17 00:02:19.620357 | orchestrator | + dns_assignment = (known after apply)
2026-01-17 00:02:19.620361 | orchestrator | + dns_name = (known after apply)
2026-01-17 00:02:19.620368 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.620372 | orchestrator | + mac_address = (known after apply)
2026-01-17 00:02:19.620396 | orchestrator | + network_id = (known after apply)
2026-01-17 00:02:19.620400 | orchestrator | + port_security_enabled = (known after apply)
2026-01-17 00:02:19.620404 | orchestrator | + qos_policy_id = (known after apply)
2026-01-17 00:02:19.620408 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.620411 | orchestrator | + security_group_ids = (known after apply)
2026-01-17 00:02:19.620415 | orchestrator | + tenant_id = (known after apply)
2026-01-17 00:02:19.620419 | orchestrator |
2026-01-17 00:02:19.620423 | orchestrator | + allowed_address_pairs {
2026-01-17 00:02:19.620426 | orchestrator | + ip_address = "192.168.16.254/32"
2026-01-17 00:02:19.620430 | orchestrator | }
2026-01-17 00:02:19.620434 | orchestrator | + allowed_address_pairs {
2026-01-17 00:02:19.620438 | orchestrator | + ip_address = "192.168.16.8/32"
2026-01-17 00:02:19.620441 | orchestrator | }
2026-01-17 00:02:19.620445 | orchestrator | + allowed_address_pairs {
2026-01-17 00:02:19.620449 | orchestrator | + ip_address = "192.168.16.9/32"
2026-01-17 00:02:19.620453 | orchestrator | }
2026-01-17 00:02:19.620456 | orchestrator |
2026-01-17 00:02:19.620475 | orchestrator | + binding (known after apply)
2026-01-17 00:02:19.620479 | orchestrator |
2026-01-17 00:02:19.620482 | orchestrator | + fixed_ip {
2026-01-17 00:02:19.620486 | orchestrator | + ip_address = "192.168.16.12"
2026-01-17 00:02:19.620490 | orchestrator | + subnet_id = (known after apply)
2026-01-17 00:02:19.620493 | orchestrator | }
2026-01-17 00:02:19.620497 | orchestrator | }
2026-01-17 00:02:19.620501 | orchestrator |
2026-01-17 00:02:19.620505 | orchestrator | # openstack_networking_port_v2.node_port_management[3] will be created
2026-01-17 00:02:19.620508 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-01-17 00:02:19.620512 | orchestrator | + admin_state_up = (known after apply)
2026-01-17 00:02:19.620516 | orchestrator | + all_fixed_ips = (known after apply)
2026-01-17 00:02:19.620520 | orchestrator | + all_security_group_ids = (known after apply)
2026-01-17 00:02:19.620523 | orchestrator | + all_tags = (known after apply)
2026-01-17 00:02:19.620527 | orchestrator | + device_id = (known after apply)
2026-01-17 00:02:19.620531 | orchestrator | + device_owner = (known after apply)
2026-01-17 00:02:19.620550 | orchestrator | + dns_assignment = (known after apply)
2026-01-17 00:02:19.620554 | orchestrator | + dns_name = (known after apply)
2026-01-17 00:02:19.620558 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.620562 | orchestrator | + mac_address = (known after apply)
2026-01-17 00:02:19.620565 | orchestrator | + network_id = (known after apply)
2026-01-17 00:02:19.620569 | orchestrator | + port_security_enabled = (known after apply)
2026-01-17 00:02:19.620573 | orchestrator | + qos_policy_id = (known after apply)
2026-01-17 00:02:19.620577 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.620580 | orchestrator | + security_group_ids = (known after apply)
2026-01-17 00:02:19.620584 | orchestrator | + tenant_id = (known after apply)
2026-01-17 00:02:19.620588 | orchestrator |
2026-01-17 00:02:19.620592 | orchestrator | + allowed_address_pairs {
2026-01-17 00:02:19.620595 | orchestrator | + ip_address = "192.168.16.254/32"
2026-01-17 00:02:19.620599 | orchestrator | }
2026-01-17 00:02:19.620603 | orchestrator | + allowed_address_pairs {
2026-01-17 00:02:19.620607 | orchestrator | + ip_address = "192.168.16.8/32"
2026-01-17 00:02:19.620611 | orchestrator | }
2026-01-17 00:02:19.620708 | orchestrator | + allowed_address_pairs {
2026-01-17 00:02:19.620713 | orchestrator | + ip_address = "192.168.16.9/32"
2026-01-17 00:02:19.620717 | orchestrator | }
2026-01-17 00:02:19.620721 | orchestrator |
2026-01-17 00:02:19.620729 | orchestrator | + binding (known after apply)
2026-01-17 00:02:19.620732 | orchestrator |
2026-01-17 00:02:19.620736 | orchestrator | + fixed_ip {
2026-01-17 00:02:19.620740 | orchestrator | + ip_address = "192.168.16.13"
2026-01-17 00:02:19.620744 | orchestrator | + subnet_id = (known after apply)
2026-01-17 00:02:19.620747 | orchestrator | }
2026-01-17 00:02:19.620751 | orchestrator | }
2026-01-17 00:02:19.620755 | orchestrator |
2026-01-17 00:02:19.620759 | orchestrator | # openstack_networking_port_v2.node_port_management[4] will be created
2026-01-17 00:02:19.620763 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-01-17 00:02:19.620766 | orchestrator | + admin_state_up = (known after apply)
2026-01-17 00:02:19.620770 | orchestrator | + all_fixed_ips = (known after apply)
2026-01-17 00:02:19.620821 | orchestrator | + all_security_group_ids = (known after apply)
2026-01-17 00:02:19.620826 | orchestrator | + all_tags = (known after apply)
2026-01-17 00:02:19.620829 | orchestrator | + device_id = (known after apply)
2026-01-17 00:02:19.620833 | orchestrator | + device_owner = (known after apply)
2026-01-17 00:02:19.620837 | orchestrator | + dns_assignment = (known after apply)
2026-01-17 00:02:19.620841 | orchestrator | + dns_name = (known after apply)
2026-01-17 00:02:19.620844 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.620848 | orchestrator | + mac_address = (known after apply)
2026-01-17 00:02:19.620852 | orchestrator | + network_id = (known after apply)
2026-01-17 00:02:19.620856 | orchestrator | + port_security_enabled = (known after apply)
2026-01-17 00:02:19.620859 | orchestrator | + qos_policy_id = (known after apply)
2026-01-17 00:02:19.620863 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.620867 | orchestrator | + security_group_ids = (known after apply)
2026-01-17 00:02:19.620871 | orchestrator | + tenant_id = (known after apply)
2026-01-17 00:02:19.620877 | orchestrator |
2026-01-17 00:02:19.620896 | orchestrator | + allowed_address_pairs {
2026-01-17 00:02:19.620900 | orchestrator | + ip_address = "192.168.16.254/32"
2026-01-17 00:02:19.620904 | orchestrator | }
2026-01-17 00:02:19.620907 | orchestrator | + allowed_address_pairs {
2026-01-17 00:02:19.620911 | orchestrator | + ip_address = "192.168.16.8/32"
2026-01-17 00:02:19.620915 | orchestrator | }
2026-01-17 00:02:19.620919 | orchestrator | + allowed_address_pairs {
2026-01-17 00:02:19.620922 | orchestrator | + ip_address = "192.168.16.9/32"
2026-01-17 00:02:19.620926 | orchestrator | }
2026-01-17 00:02:19.620930 | orchestrator |
2026-01-17 00:02:19.620934 | orchestrator | + binding (known after apply)
2026-01-17 00:02:19.620937 | orchestrator |
2026-01-17 00:02:19.620941 | orchestrator | + fixed_ip {
2026-01-17 00:02:19.620945 | orchestrator | + ip_address = "192.168.16.14"
2026-01-17 00:02:19.620949 | orchestrator | + subnet_id = (known after apply)
2026-01-17 00:02:19.620953 | orchestrator | }
2026-01-17 00:02:19.620956 | orchestrator | }
2026-01-17 00:02:19.620978 | orchestrator |
2026-01-17 00:02:19.620982 | orchestrator | # openstack_networking_port_v2.node_port_management[5] will be created
2026-01-17 00:02:19.620985 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-01-17 00:02:19.620989 | orchestrator | + admin_state_up = (known after apply)
2026-01-17 00:02:19.620993 | orchestrator | + all_fixed_ips = (known after apply)
2026-01-17 00:02:19.620997 | orchestrator | + all_security_group_ids = (known after apply)
2026-01-17 00:02:19.621001 | orchestrator | + all_tags = (known after apply)
2026-01-17 00:02:19.621004 | orchestrator | + device_id = (known after apply)
2026-01-17 00:02:19.621008 | orchestrator | + device_owner = (known after apply)
2026-01-17 00:02:19.621012 | orchestrator | + dns_assignment = (known after apply)
2026-01-17 00:02:19.621020 | orchestrator | + dns_name = (known after apply)
2026-01-17 00:02:19.621024 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.621028 | orchestrator | + mac_address = (known after apply)
2026-01-17 00:02:19.621031 | orchestrator | + network_id = (known after apply)
2026-01-17 00:02:19.621050 | orchestrator | + port_security_enabled = (known after apply)
2026-01-17 00:02:19.621056 | orchestrator | + qos_policy_id = (known after apply)
2026-01-17 00:02:19.621064 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.621068 | orchestrator | + security_group_ids = (known after apply)
2026-01-17 00:02:19.621072 | orchestrator | + tenant_id = (known after apply)
2026-01-17 00:02:19.621075 | orchestrator |
2026-01-17 00:02:19.621079 | orchestrator | + allowed_address_pairs {
2026-01-17 00:02:19.621083 | orchestrator | + ip_address = "192.168.16.254/32"
2026-01-17 00:02:19.621087 | orchestrator | }
2026-01-17 00:02:19.621091 | orchestrator | + allowed_address_pairs {
2026-01-17 00:02:19.621094 | orchestrator | + ip_address = "192.168.16.8/32"
2026-01-17 00:02:19.621098 | orchestrator | }
2026-01-17 00:02:19.621102 | orchestrator | + allowed_address_pairs {
2026-01-17 00:02:19.621106 | orchestrator | + ip_address = "192.168.16.9/32"
2026-01-17 00:02:19.621109 | orchestrator | }
2026-01-17 00:02:19.621113 | orchestrator |
2026-01-17 00:02:19.621216 | orchestrator | + binding (known after apply)
2026-01-17 00:02:19.621220 | orchestrator |
2026-01-17 00:02:19.621224 | orchestrator | + fixed_ip {
2026-01-17 00:02:19.621228 | orchestrator | + ip_address = "192.168.16.15"
2026-01-17 00:02:19.621232 | orchestrator | + subnet_id = (known after apply)
2026-01-17 00:02:19.621235 | orchestrator | }
2026-01-17 00:02:19.621239 | orchestrator | }
2026-01-17 00:02:19.621243 | orchestrator |
2026-01-17 00:02:19.621247 | orchestrator | # openstack_networking_router_interface_v2.router_interface will be created
2026-01-17 00:02:19.621251 | orchestrator | + resource "openstack_networking_router_interface_v2" "router_interface" {
2026-01-17 00:02:19.621254 | orchestrator | + force_destroy = false
2026-01-17 00:02:19.621258 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.621262 | orchestrator | + port_id = (known after apply)
2026-01-17 00:02:19.621265 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.621269 | orchestrator | + router_id = (known after apply)
2026-01-17 00:02:19.621288 | orchestrator | + subnet_id = (known after apply)
2026-01-17 00:02:19.621292 | orchestrator | }
2026-01-17 00:02:19.621296 | orchestrator |
2026-01-17 00:02:19.621300 | orchestrator | # openstack_networking_router_v2.router will be created
2026-01-17 00:02:19.621303 | orchestrator | + resource "openstack_networking_router_v2" "router" {
2026-01-17 00:02:19.621307 | orchestrator | + admin_state_up = (known after apply)
2026-01-17 00:02:19.621311 | orchestrator | + all_tags = (known after apply)
2026-01-17 00:02:19.621314 | orchestrator | + availability_zone_hints = [
2026-01-17 00:02:19.621318 | orchestrator | + "nova",
2026-01-17 00:02:19.621322 | orchestrator | ]
2026-01-17 00:02:19.621326 | orchestrator | + distributed = (known after apply)
2026-01-17 00:02:19.621329 | orchestrator | + enable_snat = (known after apply)
2026-01-17 00:02:19.621333 | orchestrator | + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
2026-01-17 00:02:19.621337 | orchestrator | + external_qos_policy_id = (known after apply)
2026-01-17 00:02:19.621340 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.621344 | orchestrator | + name = "testbed"
2026-01-17 00:02:19.621362 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.621366 | orchestrator | + tenant_id = (known after apply)
2026-01-17 00:02:19.621370 | orchestrator |
2026-01-17 00:02:19.621374 | orchestrator | + external_fixed_ip (known after apply)
2026-01-17 00:02:19.621378 | orchestrator | }
2026-01-17 00:02:19.621382 | orchestrator |
2026-01-17 00:02:19.621386 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
2026-01-17 00:02:19.621390 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
2026-01-17 00:02:19.621394 | orchestrator | + description = "ssh"
2026-01-17 00:02:19.621398 | orchestrator | + direction = "ingress"
2026-01-17 00:02:19.621401 | orchestrator | + ethertype = "IPv4"
2026-01-17 00:02:19.621405 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.621409 | orchestrator | + port_range_max = 22
2026-01-17 00:02:19.621412 | orchestrator | + port_range_min = 22
2026-01-17 00:02:19.621416 | orchestrator | + protocol = "tcp"
2026-01-17 00:02:19.621420 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.621443 | orchestrator | + remote_address_group_id = (known after apply)
2026-01-17 00:02:19.621447 | orchestrator | + remote_group_id = (known after apply)
2026-01-17 00:02:19.621451 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-01-17 00:02:19.621455 | orchestrator | + security_group_id = (known after apply)
2026-01-17 00:02:19.621459 | orchestrator | + tenant_id = (known after apply)
2026-01-17 00:02:19.621463 | orchestrator | }
2026-01-17 00:02:19.621467 | orchestrator |
2026-01-17 00:02:19.621470 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
2026-01-17 00:02:19.621474 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
2026-01-17 00:02:19.621478 | orchestrator | + description = "wireguard"
2026-01-17 00:02:19.621482 | orchestrator | + direction = "ingress"
2026-01-17 00:02:19.621485 | orchestrator | + ethertype = "IPv4"
2026-01-17 00:02:19.621489 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.621493 | orchestrator | + port_range_max = 51820
2026-01-17 00:02:19.621497 | orchestrator | + port_range_min = 51820
2026-01-17 00:02:19.621501 | orchestrator | + protocol = "udp"
2026-01-17 00:02:19.621599 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.621604 | orchestrator | + remote_address_group_id = (known after apply)
2026-01-17 00:02:19.621608 | orchestrator | + remote_group_id = (known after apply)
2026-01-17 00:02:19.621612 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-01-17 00:02:19.621616 | orchestrator | + security_group_id = (known after apply)
2026-01-17 00:02:19.621620 | orchestrator | + tenant_id = (known after apply)
2026-01-17 00:02:19.621623 | orchestrator | }
2026-01-17 00:02:19.621627 | orchestrator |
2026-01-17 00:02:19.621631 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
2026-01-17 00:02:19.621635 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
2026-01-17 00:02:19.621638 | orchestrator | + direction = "ingress"
2026-01-17 00:02:19.621642 | orchestrator | + ethertype = "IPv4"
2026-01-17 00:02:19.621646 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.621649 | orchestrator | + protocol = "tcp"
2026-01-17 00:02:19.621653 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.621660 | orchestrator | + remote_address_group_id = (known after apply)
2026-01-17 00:02:19.621679 | orchestrator | + remote_group_id = (known after apply)
2026-01-17 00:02:19.621683 | orchestrator | + remote_ip_prefix = "192.168.16.0/20"
2026-01-17 00:02:19.621687 | orchestrator | + security_group_id = (known after apply)
2026-01-17 00:02:19.621691 | orchestrator | + tenant_id = (known after apply)
2026-01-17 00:02:19.621695 | orchestrator | }
2026-01-17 00:02:19.621699 | orchestrator |
2026-01-17 00:02:19.621702 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
2026-01-17 00:02:19.621706 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
2026-01-17 00:02:19.621710 | orchestrator | + direction = "ingress"
2026-01-17 00:02:19.621714 | orchestrator | + ethertype = "IPv4"
2026-01-17 00:02:19.621717 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.621721 | orchestrator | + protocol = "udp"
2026-01-17 00:02:19.621725 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.621729 | orchestrator | + remote_address_group_id = (known after apply)
2026-01-17 00:02:19.621732 | orchestrator | + remote_group_id = (known after apply)
2026-01-17 00:02:19.621736 | orchestrator | + remote_ip_prefix = "192.168.16.0/20"
2026-01-17 00:02:19.621754 | orchestrator | + security_group_id = (known after apply)
2026-01-17 00:02:19.621759 | orchestrator | + tenant_id = (known after apply)
2026-01-17 00:02:19.621762 | orchestrator | }
2026-01-17 00:02:19.621766 | orchestrator |
2026-01-17 00:02:19.621770 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
2026-01-17 00:02:19.621777 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
2026-01-17 00:02:19.621792 | orchestrator | + direction = "ingress"
2026-01-17 00:02:19.621796 | orchestrator | + ethertype = "IPv4"
2026-01-17 00:02:19.621800 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.621803 | orchestrator | + protocol = "icmp"
2026-01-17 00:02:19.621807 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.621811 | orchestrator | + remote_address_group_id = (known after apply)
2026-01-17 00:02:19.621815 | orchestrator | + remote_group_id = (known after apply)
2026-01-17 00:02:19.621834 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-01-17 00:02:19.621838 | orchestrator | + security_group_id = (known after apply)
2026-01-17 00:02:19.621841 | orchestrator | + tenant_id = (known after apply)
2026-01-17 00:02:19.621845 | orchestrator | }
2026-01-17 00:02:19.621849 | orchestrator |
2026-01-17 00:02:19.621853 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-01-17 00:02:19.621857 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-01-17 00:02:19.621860 | orchestrator | + direction = "ingress"
2026-01-17 00:02:19.621864 | orchestrator | + ethertype = "IPv4"
2026-01-17 00:02:19.621868 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.621871 | orchestrator | + protocol = "tcp"
2026-01-17 00:02:19.621875 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.621879 | orchestrator | + remote_address_group_id = (known after apply)
2026-01-17 00:02:19.621885 | orchestrator | + remote_group_id = (known after apply)
2026-01-17 00:02:19.621889 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-01-17 00:02:19.621893 | orchestrator | + security_group_id = (known after apply)
2026-01-17 00:02:19.621911 | orchestrator | + tenant_id = (known after apply)
2026-01-17 00:02:19.621915 | orchestrator | }
2026-01-17 00:02:19.621919 | orchestrator |
2026-01-17 00:02:19.621923 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-01-17 00:02:19.621927 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-01-17 00:02:19.621930 | orchestrator | + direction = "ingress"
2026-01-17 00:02:19.621934 | orchestrator | + ethertype = "IPv4"
2026-01-17 00:02:19.621938 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.621942 | orchestrator | + protocol = "udp"
2026-01-17 00:02:19.621945 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.621949 | orchestrator | + remote_address_group_id = (known after apply)
2026-01-17 00:02:19.621953 | orchestrator | + remote_group_id = (known after apply)
2026-01-17 00:02:19.621956 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-01-17 00:02:19.621960 | orchestrator | + security_group_id = (known after apply)
2026-01-17 00:02:19.621964 | orchestrator | + tenant_id = (known after apply)
2026-01-17 00:02:19.621967 | orchestrator | }
2026-01-17 00:02:19.621971 | orchestrator |
2026-01-17 00:02:19.622037 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-01-17 00:02:19.622042 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-01-17 00:02:19.622046 | orchestrator | + direction = "ingress"
2026-01-17 00:02:19.622052 | orchestrator | + ethertype = "IPv4"
2026-01-17 00:02:19.622056 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.622060 | orchestrator | + protocol = "icmp"
2026-01-17 00:02:19.622064 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.622067 | orchestrator | + remote_address_group_id = (known after apply)
2026-01-17 00:02:19.622071 | orchestrator | + remote_group_id = (known after apply)
2026-01-17 00:02:19.622075 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-01-17 00:02:19.622078 | orchestrator | + security_group_id = (known after apply)
2026-01-17 00:02:19.622082 | orchestrator | + tenant_id = (known after apply)
2026-01-17 00:02:19.622089 | orchestrator | }
2026-01-17 00:02:19.622093 | orchestrator |
2026-01-17 00:02:19.622112 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-01-17 00:02:19.622116 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-01-17 00:02:19.622120 | orchestrator | + description = "vrrp"
2026-01-17 00:02:19.622124 | orchestrator | + direction = "ingress"
2026-01-17 00:02:19.622127 | orchestrator | + ethertype = "IPv4"
2026-01-17 00:02:19.622131 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.622135 | orchestrator | + protocol = "112"
2026-01-17 00:02:19.622139 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.622146 | orchestrator | + remote_address_group_id = (known after apply)
2026-01-17 00:02:19.622150 | orchestrator | + remote_group_id = (known after apply)
2026-01-17 00:02:19.622153 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-01-17 00:02:19.622157 | orchestrator | + security_group_id = (known after apply)
2026-01-17 00:02:19.622161 | orchestrator | + tenant_id = (known after apply)
2026-01-17 00:02:19.622164 | orchestrator | }
2026-01-17 00:02:19.622168 | orchestrator |
2026-01-17 00:02:19.622172 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created
2026-01-17 00:02:19.622192 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-01-17 00:02:19.622198 | orchestrator | + all_tags = (known after apply)
2026-01-17 00:02:19.622202 | orchestrator | + description = "management security group"
2026-01-17 00:02:19.622206 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.622209 | orchestrator | + name = "testbed-management"
2026-01-17 00:02:19.622213 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.622217 | orchestrator | + stateful = (known after apply)
2026-01-17 00:02:19.622220 | orchestrator | + tenant_id = (known after apply)
2026-01-17 00:02:19.622224 | orchestrator | }
2026-01-17 00:02:19.622228 | orchestrator |
2026-01-17 00:02:19.622232 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created
2026-01-17 00:02:19.622235 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-01-17 00:02:19.622239 | orchestrator | + all_tags = (known after apply)
2026-01-17 00:02:19.622243 | orchestrator | + description = "node security group"
2026-01-17 00:02:19.622247 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.622250 | orchestrator | + name = "testbed-node"
2026-01-17 00:02:19.622348 | orchestrator | + region = (known after apply)
2026-01-17 00:02:19.622353 | orchestrator | + stateful = (known after apply)
2026-01-17 00:02:19.622357 | orchestrator | + tenant_id = (known after apply)
2026-01-17 00:02:19.622360 | orchestrator | }
2026-01-17 00:02:19.622364 | orchestrator |
2026-01-17 00:02:19.622368 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created
2026-01-17 00:02:19.622372 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-01-17 00:02:19.622375 | orchestrator | + all_tags = (known after apply)
2026-01-17 00:02:19.622379 | orchestrator | + cidr = "192.168.16.0/20"
2026-01-17 00:02:19.622383 | orchestrator | + dns_nameservers = [
2026-01-17 00:02:19.622387 | orchestrator | + "8.8.8.8",
2026-01-17 00:02:19.622391 | orchestrator | + "9.9.9.9",
2026-01-17 00:02:19.622394 | orchestrator | ]
2026-01-17 00:02:19.622398 | orchestrator | + enable_dhcp = true
2026-01-17 00:02:19.622402 | orchestrator | + gateway_ip = (known after apply)
2026-01-17 00:02:19.622406 | orchestrator | + id = (known after apply)
2026-01-17 00:02:19.622424 | orchestrator | + ip_version = 4
2026-01-17 00:02:19.622428 | orchestrator | + ipv6_address_mode = (known after apply)
2026-01-17 00:02:19.622432 | orchestrator | + ipv6_ra_mode = (known after apply)
2026-01-17 00:02:19.622436 | orchestrator | + name = "subnet-testbed-management"
2026-01-17 00:02:19.622440 | orchestrator | + network_id = (known after apply) 2026-01-17 00:02:19.622443 | orchestrator | + no_gateway = false 2026-01-17 00:02:19.622447 | orchestrator | + region = (known after apply) 2026-01-17 00:02:19.622451 | orchestrator | + service_types = (known after apply) 2026-01-17 00:02:19.622460 | orchestrator | + tenant_id = (known after apply) 2026-01-17 00:02:19.622464 | orchestrator | 2026-01-17 00:02:19.622468 | orchestrator | + allocation_pool { 2026-01-17 00:02:19.622472 | orchestrator | + end = "192.168.31.250" 2026-01-17 00:02:19.622475 | orchestrator | + start = "192.168.31.200" 2026-01-17 00:02:19.622479 | orchestrator | } 2026-01-17 00:02:19.622483 | orchestrator | } 2026-01-17 00:02:19.622487 | orchestrator | 2026-01-17 00:02:19.622507 | orchestrator | # terraform_data.image will be created 2026-01-17 00:02:19.622511 | orchestrator | + resource "terraform_data" "image" { 2026-01-17 00:02:19.622515 | orchestrator | + id = (known after apply) 2026-01-17 00:02:19.622519 | orchestrator | + input = "Ubuntu 24.04" 2026-01-17 00:02:19.622522 | orchestrator | + output = (known after apply) 2026-01-17 00:02:19.622526 | orchestrator | } 2026-01-17 00:02:19.622530 | orchestrator | 2026-01-17 00:02:19.622533 | orchestrator | # terraform_data.image_node will be created 2026-01-17 00:02:19.622537 | orchestrator | + resource "terraform_data" "image_node" { 2026-01-17 00:02:19.622541 | orchestrator | + id = (known after apply) 2026-01-17 00:02:19.622544 | orchestrator | + input = "Ubuntu 24.04" 2026-01-17 00:02:19.622548 | orchestrator | + output = (known after apply) 2026-01-17 00:02:19.622552 | orchestrator | } 2026-01-17 00:02:19.622556 | orchestrator | 2026-01-17 00:02:19.622559 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-01-17 00:02:19.622563 | orchestrator | 2026-01-17 00:02:19.622581 | orchestrator | Changes to Outputs: 2026-01-17 00:02:19.622585 | orchestrator | + manager_address = (sensitive value) 2026-01-17 00:02:19.622589 | orchestrator | + private_key = (sensitive value) 2026-01-17 00:02:22.294095 | orchestrator | terraform_data.image: Creating... 2026-01-17 00:02:22.294315 | orchestrator | terraform_data.image: Creation complete after 0s [id=91f68c01-93f2-4d41-14ee-407f61f1e370] 2026-01-17 00:02:22.295172 | orchestrator | terraform_data.image_node: Creating... 2026-01-17 00:02:22.295449 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=2ffa4ec3-4be8-b210-842f-eb2d3f7187e8] 2026-01-17 00:02:22.313729 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-01-17 00:02:22.315017 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-01-17 00:02:22.322549 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-01-17 00:02:22.322598 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-01-17 00:02:22.351690 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-01-17 00:02:22.357127 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-01-17 00:02:22.357205 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-01-17 00:02:22.357211 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-01-17 00:02:22.374102 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-01-17 00:02:22.374172 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 
2026-01-17 00:02:22.797590 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-17 00:02:22.803298 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-17 00:02:22.805689 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-01-17 00:02:22.810385 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-01-17 00:02:22.849098 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-01-17 00:02:22.861266 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-01-17 00:02:23.568143 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 2s [id=e639c13a-c093-4bd3-b555-76c3e71b56a9]
2026-01-17 00:02:23.579029 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-01-17 00:02:25.966757 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=03c99a05-96d9-4471-aa9e-2837c3fbd541]
2026-01-17 00:02:25.982456 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-01-17 00:02:25.998740 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=66cad329-aa8c-4366-8769-2bca3a7bcb41]
2026-01-17 00:02:26.010176 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-01-17 00:02:26.013832 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=0d0df1988831df2968e357964842528d7e7a75df]
2026-01-17 00:02:26.018671 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-01-17 00:02:26.021623 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=386eb8af-61b6-405b-8873-9456a29b0ccf]
2026-01-17 00:02:26.029882 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-01-17 00:02:26.041442 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=1215eb05-d4be-4bfd-8c82-e464703dc233]
2026-01-17 00:02:26.050691 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-01-17 00:02:26.062996 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=b2725b1a-ab02-479a-b1d7-829717bc50e1]
2026-01-17 00:02:26.068334 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-01-17 00:02:26.082670 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=653651ff-f0c3-4f93-a415-b7bde2938506]
2026-01-17 00:02:26.089999 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-01-17 00:02:26.110433 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=bd9e2794-f462-41d3-bb22-ac4c4b73281f]
2026-01-17 00:02:26.111009 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=3748448b-4cb4-41ff-a93c-c2a900d49ce0]
2026-01-17 00:02:26.120180 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-01-17 00:02:26.125762 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=89953a4d-629d-4187-87cb-8eaa4172afa2]
2026-01-17 00:02:26.126929 | orchestrator | local_file.id_rsa_pub: Creating...
2026-01-17 00:02:26.131682 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=800be5141a53d53191716737cdef609e0ce021f7]
2026-01-17 00:02:26.952122 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=c335c01d-edbe-4951-afc9-d3f303e71c9e]
2026-01-17 00:02:27.041267 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=cf62fa07-b70a-4a30-ac6c-5fef43f0538f]
2026-01-17 00:02:27.053417 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-01-17 00:02:29.359988 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=4c6ad37b-235a-42f0-84c6-49b8561a2d55]
2026-01-17 00:02:29.442801 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=26472a97-710b-416f-a0d1-a56c77a5a98a]
2026-01-17 00:02:29.472054 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=c932fa08-39f8-42bb-b31a-2bfdbc19349f]
2026-01-17 00:02:29.512459 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=4a12610b-6fe3-4cad-9944-f8a257ec9d82]
2026-01-17 00:02:30.153922 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=40676af1-bb63-41c0-bff5-9ddc0a326d9b]
2026-01-17 00:02:30.323128 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=a81b0aae-ecd3-46bc-81d3-c119638f529b]
2026-01-17 00:02:30.769288 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=4381ecb5-2906-41ab-956f-1ad3a4b98f33]
2026-01-17 00:02:30.782447 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-01-17 00:02:30.783749 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-01-17 00:02:30.784010 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-01-17 00:02:31.038194 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=9b057df8-2b21-42ce-bbbd-e825cef72801]
2026-01-17 00:02:31.046825 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-01-17 00:02:31.052233 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-01-17 00:02:31.054711 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-01-17 00:02:31.063937 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-01-17 00:02:31.068430 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-01-17 00:02:31.068449 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-01-17 00:02:31.068456 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-01-17 00:02:31.077911 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-01-17 00:02:31.090606 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=02fe56fb-5c26-4deb-808f-20d34db3b2b8]
2026-01-17 00:02:31.099234 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-01-17 00:02:31.292398 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=d544690d-b2bc-4cc5-9a7b-6144d15f60e7]
2026-01-17 00:02:31.303851 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-01-17 00:02:31.923633 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=81f47756-9d97-4d39-9f71-e5c8aba7bb9d]
2026-01-17 00:02:31.931533 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-01-17 00:02:32.108876 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=7a2a289e-19ea-4efa-866f-1811fbe49507]
2026-01-17 00:02:32.113724 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-01-17 00:02:32.143057 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=d5cda626-c5a4-4adf-b17f-21aedf5decdd]
2026-01-17 00:02:32.150285 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-01-17 00:02:32.150382 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=99e31e24-1a78-4f10-a4a6-6f23b1729a77]
2026-01-17 00:02:32.155416 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-01-17 00:02:32.259593 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=762b1307-a5f7-4233-9c80-0281088d35a8]
2026-01-17 00:02:32.816959 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-01-17 00:02:32.817016 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=cdeb767a-63e9-4898-8f2b-05342ede77c3]
2026-01-17 00:02:32.817026 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-01-17 00:02:32.817035 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=43151515-5ee4-47f3-bb1e-f48af6271cd9]
2026-01-17 00:02:32.817062 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 2s [id=06f1e9d1-2152-4ded-ad51-c1cc0f4f2523]
2026-01-17 00:02:32.817073 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 2s [id=15e4772d-7751-434c-9763-7ffc10902733]
2026-01-17 00:02:32.856828 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=fac01ef4-f55e-4214-9bca-b1b84ddee44f]
2026-01-17 00:02:33.056170 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=ed7b6c2b-3224-4430-b983-b41dab157d32]
2026-01-17 00:02:33.398108 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 2s [id=daadfa2a-f431-419c-97e4-64b4c73ee670]
2026-01-17 00:02:33.473010 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=8412fc3b-75f3-43f6-8bce-89036f630bd2]
2026-01-17 00:02:33.607024 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 2s [id=7fa69aef-8fe1-4046-89e4-277ea1bbb5c0]
2026-01-17 00:02:33.710159 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 2s [id=ad778874-1947-43e7-85af-fa74ac883d40]
2026-01-17 00:02:36.646708 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 6s [id=052def7d-cbdc-46fc-881d-83d462af73a0]
2026-01-17 00:02:36.664778 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-01-17 00:02:36.681547 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-01-17 00:02:36.683311 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-01-17 00:02:36.687434 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-01-17 00:02:36.689230 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-01-17 00:02:36.708075 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-01-17 00:02:36.708284 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-01-17 00:02:38.677096 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=886465ee-e7e9-473b-8ca5-cd23ba7cc372]
2026-01-17 00:02:38.682713 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-01-17 00:02:38.687121 | orchestrator | local_file.inventory: Creating...
2026-01-17 00:02:38.689764 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-01-17 00:02:38.691017 | orchestrator | local_file.inventory: Creation complete after 0s [id=a12b137fa0fb73afde356f8a0c7f8c16c7d3cd80]
2026-01-17 00:02:38.695161 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=4116023079b37148b0950b4ea78bc9bfbd3a63a9]
2026-01-17 00:02:39.553675 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=886465ee-e7e9-473b-8ca5-cd23ba7cc372]
2026-01-17 00:02:46.690319 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-01-17 00:02:46.690434 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-01-17 00:02:46.690474 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-01-17 00:02:46.691675 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-01-17 00:02:46.709131 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-01-17 00:02:46.709191 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-01-17 00:02:56.699212 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-01-17 00:02:56.699306 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-01-17 00:02:56.699328 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-01-17 00:02:56.699338 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-01-17 00:02:56.709911 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-01-17 00:02:56.709980 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-01-17 00:03:06.703446 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-01-17 00:03:06.703656 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-01-17 00:03:06.703674 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-01-17 00:03:06.703682 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-01-17 00:03:06.710871 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-01-17 00:03:06.710954 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-01-17 00:03:07.371857 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 30s [id=75f751a7-914f-4707-aa01-fa10d140da68]
2026-01-17 00:03:07.542075 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=3e6c3716-c638-49d7-bbc8-01b435012b84]
2026-01-17 00:03:16.712304 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-01-17 00:03:16.712391 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-01-17 00:03:16.712397 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-01-17 00:03:16.712427 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-01-17 00:03:18.247626 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=c3a3f1ca-17b1-43de-9bef-8b3701142c19]
2026-01-17 00:03:26.718443 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed]
2026-01-17 00:03:26.718545 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed]
2026-01-17 00:03:26.718558 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-01-17 00:03:36.718713 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [1m0s elapsed]
2026-01-17 00:03:36.718821 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [1m0s elapsed]
2026-01-17 00:03:36.718830 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [1m0s elapsed]
2026-01-17 00:03:37.692005 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 1m1s [id=2c03b29a-e3c1-4e31-8281-988a5fdda6d8]
2026-01-17 00:03:37.812442 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 1m1s [id=ff54eed9-86e0-4a0d-9c45-4b835b1366f6]
2026-01-17 00:03:37.980281 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 1m1s [id=7281aaa8-b66f-4e24-8549-cd8ec6783022]
2026-01-17 00:03:37.992114 | orchestrator | null_resource.node_semaphore: Creating...
2026-01-17 00:03:37.993088 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-01-17 00:03:38.004932 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-01-17 00:03:38.007449 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=8961674894552902515]
2026-01-17 00:03:38.011777 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-01-17 00:03:38.011831 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-01-17 00:03:38.012012 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-01-17 00:03:38.012166 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-01-17 00:03:38.013276 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-01-17 00:03:38.029387 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-01-17 00:03:38.045504 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-01-17 00:03:38.052850 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-01-17 00:03:41.478921 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=2c03b29a-e3c1-4e31-8281-988a5fdda6d8/66cad329-aa8c-4366-8769-2bca3a7bcb41] 2026-01-17 00:03:41.484474 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=ff54eed9-86e0-4a0d-9c45-4b835b1366f6/1215eb05-d4be-4bfd-8c82-e464703dc233] 2026-01-17 00:03:41.613684 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=2c03b29a-e3c1-4e31-8281-988a5fdda6d8/386eb8af-61b6-405b-8873-9456a29b0ccf] 2026-01-17 00:03:41.639617 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=7281aaa8-b66f-4e24-8549-cd8ec6783022/3748448b-4cb4-41ff-a93c-c2a900d49ce0] 2026-01-17 00:03:41.695300 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=7281aaa8-b66f-4e24-8549-cd8ec6783022/b2725b1a-ab02-479a-b1d7-829717bc50e1] 2026-01-17 00:03:43.160835 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=2c03b29a-e3c1-4e31-8281-988a5fdda6d8/03c99a05-96d9-4471-aa9e-2837c3fbd541] 2026-01-17 00:03:43.478065 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=ff54eed9-86e0-4a0d-9c45-4b835b1366f6/bd9e2794-f462-41d3-bb22-ac4c4b73281f] 2026-01-17 00:03:43.558144 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 6s [id=ff54eed9-86e0-4a0d-9c45-4b835b1366f6/89953a4d-629d-4187-87cb-8eaa4172afa2] 2026-01-17 00:03:48.011999 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Still creating... [10s elapsed] 2026-01-17 00:03:48.053212 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... 
[10s elapsed] 2026-01-17 00:03:48.181025 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=7281aaa8-b66f-4e24-8549-cd8ec6783022/653651ff-f0c3-4f93-a415-b7bde2938506] 2026-01-17 00:03:58.054307 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-01-17 00:03:58.643836 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=0cdc1f18-4f85-4671-9795-552e69c0f1ce] 2026-01-17 00:03:58.670085 | orchestrator | 2026-01-17 00:03:58.670135 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-01-17 00:03:58.670143 | orchestrator | 2026-01-17 00:03:58.670147 | orchestrator | Outputs: 2026-01-17 00:03:58.670150 | orchestrator | 2026-01-17 00:03:58.670154 | orchestrator | manager_address = 2026-01-17 00:03:58.670157 | orchestrator | private_key = 2026-01-17 00:03:58.884947 | orchestrator | ok: Runtime: 0:01:45.090413 2026-01-17 00:03:58.918337 | 2026-01-17 00:03:58.918477 | TASK [Create infrastructure (stable)] 2026-01-17 00:03:59.456722 | orchestrator | skipping: Conditional result was False 2026-01-17 00:03:59.467302 | 2026-01-17 00:03:59.467434 | TASK [Fetch manager address] 2026-01-17 00:03:59.935279 | orchestrator | ok 2026-01-17 00:03:59.948230 | 2026-01-17 00:03:59.948435 | TASK [Set manager_host address] 2026-01-17 00:04:00.031891 | orchestrator | ok 2026-01-17 00:04:00.042202 | 2026-01-17 00:04:00.042346 | LOOP [Update ansible collections] 2026-01-17 00:04:01.184616 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-17 00:04:01.185706 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-01-17 00:04:01.186167 | orchestrator | Starting galaxy collection install process 2026-01-17 00:04:01.186393 | orchestrator | Process install dependency map 2026-01-17 00:04:01.186475 | orchestrator | Starting collection 
install process 2026-01-17 00:04:01.186614 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons' 2026-01-17 00:04:01.186647 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons 2026-01-17 00:04:01.186686 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-01-17 00:04:01.186747 | orchestrator | ok: Item: commons Runtime: 0:00:00.724979 2026-01-17 00:04:02.172757 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-17 00:04:02.172953 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-01-17 00:04:02.173078 | orchestrator | Starting galaxy collection install process 2026-01-17 00:04:02.173124 | orchestrator | Process install dependency map 2026-01-17 00:04:02.173163 | orchestrator | Starting collection install process 2026-01-17 00:04:02.173199 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services' 2026-01-17 00:04:02.173234 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services 2026-01-17 00:04:02.173269 | orchestrator | osism.services:999.0.0 was installed successfully 2026-01-17 00:04:02.173323 | orchestrator | ok: Item: services Runtime: 0:00:00.731042 2026-01-17 00:04:02.195418 | 2026-01-17 00:04:02.195598 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-17 00:04:12.749040 | orchestrator | ok 2026-01-17 00:04:12.759526 | 2026-01-17 00:04:12.759655 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-17 00:05:12.816248 | orchestrator | ok 2026-01-17 00:05:12.831174 | 2026-01-17 00:05:12.831363 | TASK [Fetch manager ssh hostkey] 2026-01-17 00:05:14.417935 | 
orchestrator | Output suppressed because no_log was given 2026-01-17 00:05:14.433123 | 2026-01-17 00:05:14.433296 | TASK [Get ssh keypair from terraform environment] 2026-01-17 00:05:14.968527 | orchestrator | ok: Runtime: 0:00:00.005643 2026-01-17 00:05:14.986477 | 2026-01-17 00:05:14.986815 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-17 00:05:15.037770 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-01-17 00:05:15.047777 | 2026-01-17 00:05:15.047927 | TASK [Run manager part 0] 2026-01-17 00:05:15.892150 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-17 00:05:15.937856 | orchestrator | 2026-01-17 00:05:15.937895 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-01-17 00:05:15.937902 | orchestrator | 2026-01-17 00:05:15.937914 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-01-17 00:05:17.936060 | orchestrator | ok: [testbed-manager] 2026-01-17 00:05:17.936104 | orchestrator | 2026-01-17 00:05:17.936126 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-17 00:05:17.936137 | orchestrator | 2026-01-17 00:05:17.936156 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-17 00:05:19.762007 | orchestrator | ok: [testbed-manager] 2026-01-17 00:05:19.762071 | orchestrator | 2026-01-17 00:05:19.762085 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-17 00:05:20.396463 | orchestrator | ok: [testbed-manager] 2026-01-17 00:05:20.396504 | orchestrator | 2026-01-17 00:05:20.396509 | orchestrator | TASK [Set repo_path fact] 
****************************************************** 2026-01-17 00:05:20.434595 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:05:20.434627 | orchestrator | 2026-01-17 00:05:20.434634 | orchestrator | TASK [Update package cache] **************************************************** 2026-01-17 00:05:20.457194 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:05:20.457281 | orchestrator | 2026-01-17 00:05:20.457289 | orchestrator | TASK [Install required packages] *********************************************** 2026-01-17 00:05:20.482353 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:05:20.482381 | orchestrator | 2026-01-17 00:05:20.482386 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-01-17 00:05:20.509422 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:05:20.509458 | orchestrator | 2026-01-17 00:05:20.509463 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-17 00:05:20.534409 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:05:20.534481 | orchestrator | 2026-01-17 00:05:20.534533 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-01-17 00:05:20.560540 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:05:20.560572 | orchestrator | 2026-01-17 00:05:20.560580 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-01-17 00:05:20.582985 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:05:20.583015 | orchestrator | 2026-01-17 00:05:20.583023 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-01-17 00:05:21.300069 | orchestrator | changed: [testbed-manager] 2026-01-17 00:05:21.300103 | orchestrator | 2026-01-17 00:05:21.300110 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-01-17 
00:08:14.646083 | orchestrator | changed: [testbed-manager] 2026-01-17 00:08:14.646130 | orchestrator | 2026-01-17 00:08:14.646141 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-01-17 00:09:39.874534 | orchestrator | changed: [testbed-manager] 2026-01-17 00:09:39.874575 | orchestrator | 2026-01-17 00:09:39.874582 | orchestrator | TASK [Install required packages] *********************************************** 2026-01-17 00:10:07.141772 | orchestrator | changed: [testbed-manager] 2026-01-17 00:10:07.141862 | orchestrator | 2026-01-17 00:10:07.141878 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-01-17 00:10:17.624960 | orchestrator | changed: [testbed-manager] 2026-01-17 00:10:17.625051 | orchestrator | 2026-01-17 00:10:17.625068 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-17 00:10:17.679995 | orchestrator | ok: [testbed-manager] 2026-01-17 00:10:17.680075 | orchestrator | 2026-01-17 00:10:17.680090 | orchestrator | TASK [Get current user] ******************************************************** 2026-01-17 00:10:18.493381 | orchestrator | ok: [testbed-manager] 2026-01-17 00:10:18.493464 | orchestrator | 2026-01-17 00:10:18.493482 | orchestrator | TASK [Create venv directory] *************************************************** 2026-01-17 00:10:19.303132 | orchestrator | changed: [testbed-manager] 2026-01-17 00:10:19.303227 | orchestrator | 2026-01-17 00:10:19.303243 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-01-17 00:10:26.027211 | orchestrator | changed: [testbed-manager] 2026-01-17 00:10:26.027277 | orchestrator | 2026-01-17 00:10:26.027312 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-01-17 00:10:32.330121 | orchestrator | changed: [testbed-manager] 2026-01-17 00:10:32.330177 | 
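The "Create venv directory" / "Install netaddr in venv" / "Install ansible-core in venv" sequence above is the standard bootstrap of an isolated Python environment. A minimal sketch, assuming a writable demo path (`VENV_DIR` is a placeholder; the job itself uses /opt/venv, and the pip installs need network access, so they are shown as comments):

```shell
# Create an isolated Python venv like the playbook does for /opt/venv.
# VENV_DIR is a demo path, not the job's real location.
VENV_DIR=${VENV_DIR:-/tmp/testbed-venv-demo}
python3 -m venv "$VENV_DIR"
# With network access, the playbook then installs into that venv:
#   "$VENV_DIR/bin/pip" install netaddr ansible-core 'requests>=2.32.2' 'docker>=7.1.0'
"$VENV_DIR/bin/python" --version
```

Keeping these tools in a venv rather than the system Python is what lets the later "Recursively change ownership of /opt/venv" task hand the whole environment to the operator user in one step.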
orchestrator | 2026-01-17 00:10:32.330185 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-01-17 00:10:34.973548 | orchestrator | changed: [testbed-manager] 2026-01-17 00:10:34.973583 | orchestrator | 2026-01-17 00:10:34.973589 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-01-17 00:10:36.767277 | orchestrator | changed: [testbed-manager] 2026-01-17 00:10:36.767317 | orchestrator | 2026-01-17 00:10:36.767324 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-01-17 00:10:37.940011 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-17 00:10:37.940053 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-17 00:10:37.940060 | orchestrator | 2026-01-17 00:10:37.940066 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-01-17 00:10:38.007838 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-17 00:10:38.007894 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-17 00:10:38.007906 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-17 00:10:38.007915 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-01-17 00:10:46.873965 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-17 00:10:46.874164 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-17 00:10:46.874175 | orchestrator | 2026-01-17 00:10:46.874181 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-01-17 00:10:47.455447 | orchestrator | changed: [testbed-manager] 2026-01-17 00:10:47.455550 | orchestrator | 2026-01-17 00:10:47.455577 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-01-17 00:13:06.509500 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-01-17 00:13:06.509582 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-01-17 00:13:06.509598 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-01-17 00:13:06.509611 | orchestrator | 2026-01-17 00:13:06.509624 | orchestrator | TASK [Install local collections] *********************************************** 2026-01-17 00:13:09.114513 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-01-17 00:13:09.114571 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-01-17 00:13:09.114583 | orchestrator | 2026-01-17 00:13:09.114592 | orchestrator | PLAY [Create operator user] **************************************************** 2026-01-17 00:13:09.114601 | orchestrator | 2026-01-17 00:13:09.114610 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-17 00:13:10.641616 | orchestrator | ok: [testbed-manager] 2026-01-17 00:13:10.641651 | orchestrator | 2026-01-17 00:13:10.641658 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-17 00:13:10.700057 | orchestrator | ok: [testbed-manager] 2026-01-17 00:13:10.700109 | 
orchestrator | 2026-01-17 00:13:10.700119 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-17 00:13:10.779342 | orchestrator | ok: [testbed-manager] 2026-01-17 00:13:10.779402 | orchestrator | 2026-01-17 00:13:10.779418 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-01-17 00:13:12.100440 | orchestrator | changed: [testbed-manager] 2026-01-17 00:13:12.101159 | orchestrator | 2026-01-17 00:13:12.101192 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-17 00:13:12.925319 | orchestrator | changed: [testbed-manager] 2026-01-17 00:13:12.925383 | orchestrator | 2026-01-17 00:13:12.925397 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-17 00:13:14.449552 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-01-17 00:13:14.449610 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-01-17 00:13:14.449622 | orchestrator | 2026-01-17 00:13:14.449648 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-01-17 00:13:15.881170 | orchestrator | changed: [testbed-manager] 2026-01-17 00:13:15.881220 | orchestrator | 2026-01-17 00:13:15.881227 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-17 00:13:17.796491 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-01-17 00:13:17.796574 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-01-17 00:13:17.796588 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-01-17 00:13:17.796598 | orchestrator | 2026-01-17 00:13:17.796609 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-17 00:13:17.856862 | orchestrator | skipping: 
[testbed-manager] 2026-01-17 00:13:17.856902 | orchestrator | 2026-01-17 00:13:17.856909 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-17 00:13:17.952210 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:13:17.952273 | orchestrator | 2026-01-17 00:13:17.952284 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-17 00:13:18.576227 | orchestrator | changed: [testbed-manager] 2026-01-17 00:13:18.576331 | orchestrator | 2026-01-17 00:13:18.576354 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-17 00:13:18.652676 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:13:18.652762 | orchestrator | 2026-01-17 00:13:18.652777 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-17 00:13:19.573632 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-17 00:13:19.573724 | orchestrator | changed: [testbed-manager] 2026-01-17 00:13:19.573738 | orchestrator | 2026-01-17 00:13:19.573747 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-17 00:13:19.612503 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:13:19.612561 | orchestrator | 2026-01-17 00:13:19.612570 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-17 00:13:19.654328 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:13:19.654406 | orchestrator | 2026-01-17 00:13:19.654417 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-17 00:13:19.695023 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:13:19.695101 | orchestrator | 2026-01-17 00:13:19.695127 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-17 00:13:19.783934 | 
orchestrator | skipping: [testbed-manager] 2026-01-17 00:13:19.784010 | orchestrator | 2026-01-17 00:13:19.784026 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-17 00:13:20.556917 | orchestrator | ok: [testbed-manager] 2026-01-17 00:13:20.556952 | orchestrator | 2026-01-17 00:13:20.556959 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-17 00:13:20.556967 | orchestrator | 2026-01-17 00:13:20.556973 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-17 00:13:22.016655 | orchestrator | ok: [testbed-manager] 2026-01-17 00:13:22.016749 | orchestrator | 2026-01-17 00:13:22.016766 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-01-17 00:13:23.013915 | orchestrator | changed: [testbed-manager] 2026-01-17 00:13:23.014005 | orchestrator | 2026-01-17 00:13:23.014077 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:13:23.014095 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-01-17 00:13:23.014128 | orchestrator | 2026-01-17 00:13:23.406677 | orchestrator | ok: Runtime: 0:08:07.799709 2026-01-17 00:13:23.424349 | 2026-01-17 00:13:23.424502 | TASK [Point out that login on the manager is now possible] 2026-01-17 00:13:23.462066 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-01-17 00:13:23.473446 | 2026-01-17 00:13:23.473584 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-17 00:13:23.512272 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-01-17 00:13:23.522709 | 2026-01-17 00:13:23.522888 | TASK [Run manager part 1 + 2] 2026-01-17 00:13:26.536931 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-17 00:13:26.684045 | orchestrator | 2026-01-17 00:13:26.684164 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-01-17 00:13:26.684182 | orchestrator | 2026-01-17 00:13:26.684208 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-17 00:13:29.937322 | orchestrator | ok: [testbed-manager] 2026-01-17 00:13:29.937386 | orchestrator | 2026-01-17 00:13:29.937422 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-17 00:13:29.976623 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:13:29.976665 | orchestrator | 2026-01-17 00:13:29.976676 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-17 00:13:30.024150 | orchestrator | ok: [testbed-manager] 2026-01-17 00:13:30.024192 | orchestrator | 2026-01-17 00:13:30.024203 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-17 00:13:30.078988 | orchestrator | ok: [testbed-manager] 2026-01-17 00:13:30.079034 | orchestrator | 2026-01-17 00:13:30.079045 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-17 00:13:30.162315 | orchestrator | ok: [testbed-manager] 2026-01-17 00:13:30.162358 | orchestrator | 2026-01-17 00:13:30.162368 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-17 00:13:30.233944 | orchestrator | ok: [testbed-manager] 2026-01-17 00:13:30.233992 | orchestrator | 2026-01-17 00:13:30.234006 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-17 00:13:30.277785 | 
orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-01-17 00:13:30.277818 | orchestrator | 2026-01-17 00:13:30.277823 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-17 00:13:31.068887 | orchestrator | ok: [testbed-manager] 2026-01-17 00:13:31.069191 | orchestrator | 2026-01-17 00:13:31.069207 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-17 00:13:31.112869 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:13:31.112904 | orchestrator | 2026-01-17 00:13:31.112910 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-17 00:13:32.548499 | orchestrator | changed: [testbed-manager] 2026-01-17 00:13:32.548541 | orchestrator | 2026-01-17 00:13:32.548550 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-17 00:13:33.157472 | orchestrator | ok: [testbed-manager] 2026-01-17 00:13:33.157510 | orchestrator | 2026-01-17 00:13:33.157517 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-17 00:13:34.363766 | orchestrator | changed: [testbed-manager] 2026-01-17 00:13:34.363810 | orchestrator | 2026-01-17 00:13:34.363821 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-17 00:13:52.445113 | orchestrator | changed: [testbed-manager] 2026-01-17 00:13:52.445185 | orchestrator | 2026-01-17 00:13:52.445196 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-17 00:13:53.172897 | orchestrator | ok: [testbed-manager] 2026-01-17 00:13:53.172963 | orchestrator | 2026-01-17 00:13:53.172973 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-01-17 00:13:53.237868 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:13:53.237935 | orchestrator | 2026-01-17 00:13:53.237950 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-01-17 00:13:54.305340 | orchestrator | changed: [testbed-manager] 2026-01-17 00:13:54.305413 | orchestrator | 2026-01-17 00:13:54.305435 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-01-17 00:13:55.291955 | orchestrator | changed: [testbed-manager] 2026-01-17 00:13:55.292089 | orchestrator | 2026-01-17 00:13:55.292108 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-01-17 00:13:55.886520 | orchestrator | changed: [testbed-manager] 2026-01-17 00:13:55.886601 | orchestrator | 2026-01-17 00:13:55.886617 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-01-17 00:13:55.930125 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-17 00:13:55.930236 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-17 00:13:55.930252 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-17 00:13:55.930265 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-01-17 00:13:58.268782 | orchestrator | changed: [testbed-manager] 2026-01-17 00:13:58.268864 | orchestrator | 2026-01-17 00:13:58.268874 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-01-17 00:14:07.690323 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-01-17 00:14:07.690396 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-01-17 00:14:07.690408 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-01-17 00:14:07.690416 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-01-17 00:14:07.690430 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-01-17 00:14:07.690437 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-01-17 00:14:07.690444 | orchestrator | 2026-01-17 00:14:07.690451 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-01-17 00:14:08.739599 | orchestrator | changed: [testbed-manager] 2026-01-17 00:14:08.739638 | orchestrator | 2026-01-17 00:14:08.739645 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-01-17 00:14:08.785524 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:14:08.785563 | orchestrator | 2026-01-17 00:14:08.785571 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-01-17 00:14:12.196439 | orchestrator | changed: [testbed-manager] 2026-01-17 00:14:12.196537 | orchestrator | 2026-01-17 00:14:12.196554 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-01-17 00:14:12.244536 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:14:12.244608 | orchestrator | 2026-01-17 00:14:12.244622 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-01-17 00:15:53.463952 | orchestrator | changed: [testbed-manager] 2026-01-17 
00:15:53.464072 | orchestrator | 2026-01-17 00:15:53.464105 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-17 00:15:54.668636 | orchestrator | ok: [testbed-manager] 2026-01-17 00:15:54.668684 | orchestrator | 2026-01-17 00:15:54.668694 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:15:54.668702 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-01-17 00:15:54.668708 | orchestrator | 2026-01-17 00:15:55.166963 | orchestrator | ok: Runtime: 0:02:30.884577 2026-01-17 00:15:55.179842 | 2026-01-17 00:15:55.180021 | TASK [Reboot manager] 2026-01-17 00:15:56.718612 | orchestrator | ok: Runtime: 0:00:01.005758 2026-01-17 00:15:56.735754 | 2026-01-17 00:15:56.735940 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-17 00:16:13.183775 | orchestrator | ok 2026-01-17 00:16:13.194503 | 2026-01-17 00:16:13.194660 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-17 00:17:13.231709 | orchestrator | ok 2026-01-17 00:17:13.239207 | 2026-01-17 00:17:13.239330 | TASK [Deploy manager + bootstrap nodes] 2026-01-17 00:17:15.916661 | orchestrator | 2026-01-17 00:17:15.916851 | orchestrator | # DEPLOY MANAGER 2026-01-17 00:17:15.916875 | orchestrator | 2026-01-17 00:17:15.916890 | orchestrator | + set -e 2026-01-17 00:17:15.916903 | orchestrator | + echo 2026-01-17 00:17:15.916917 | orchestrator | + echo '# DEPLOY MANAGER' 2026-01-17 00:17:15.916933 | orchestrator | + echo 2026-01-17 00:17:15.916983 | orchestrator | + cat /opt/manager-vars.sh 2026-01-17 00:17:15.920838 | orchestrator | export NUMBER_OF_NODES=6 2026-01-17 00:17:15.920871 | orchestrator | 2026-01-17 00:17:15.920883 | orchestrator | export CEPH_VERSION=reef 2026-01-17 00:17:15.920896 | orchestrator | export CONFIGURATION_VERSION=main 2026-01-17 00:17:15.920908 | orchestrator 
| export MANAGER_VERSION=latest 2026-01-17 00:17:15.920931 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-01-17 00:17:15.920942 | orchestrator | 2026-01-17 00:17:15.920960 | orchestrator | export ARA=false 2026-01-17 00:17:15.920971 | orchestrator | export DEPLOY_MODE=manager 2026-01-17 00:17:15.920988 | orchestrator | export TEMPEST=true 2026-01-17 00:17:15.921000 | orchestrator | export IS_ZUUL=true 2026-01-17 00:17:15.921010 | orchestrator | 2026-01-17 00:17:15.921028 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.65 2026-01-17 00:17:15.921040 | orchestrator | export EXTERNAL_API=false 2026-01-17 00:17:15.921051 | orchestrator | 2026-01-17 00:17:15.921100 | orchestrator | export IMAGE_USER=ubuntu 2026-01-17 00:17:15.921118 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-01-17 00:17:15.921130 | orchestrator | 2026-01-17 00:17:15.921141 | orchestrator | export CEPH_STACK=ceph-ansible 2026-01-17 00:17:15.921257 | orchestrator | 2026-01-17 00:17:15.921274 | orchestrator | + echo 2026-01-17 00:17:15.921287 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-17 00:17:15.922433 | orchestrator | ++ export INTERACTIVE=false 2026-01-17 00:17:15.922455 | orchestrator | ++ INTERACTIVE=false 2026-01-17 00:17:15.922468 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-17 00:17:15.922481 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-17 00:17:15.922776 | orchestrator | + source /opt/manager-vars.sh 2026-01-17 00:17:15.922797 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-17 00:17:15.922809 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-17 00:17:15.922829 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-17 00:17:15.922840 | orchestrator | ++ CEPH_VERSION=reef 2026-01-17 00:17:15.922851 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-17 00:17:15.922862 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-17 00:17:15.922879 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-17 00:17:15.922931 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-01-17 00:17:15.922967 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-17 00:17:15.923014 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-17 00:17:15.923037 | orchestrator | ++ export ARA=false 2026-01-17 00:17:15.923049 | orchestrator | ++ ARA=false 2026-01-17 00:17:15.923060 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-17 00:17:15.923070 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-17 00:17:15.923086 | orchestrator | ++ export TEMPEST=true 2026-01-17 00:17:15.923098 | orchestrator | ++ TEMPEST=true 2026-01-17 00:17:15.923108 | orchestrator | ++ export IS_ZUUL=true 2026-01-17 00:17:15.923119 | orchestrator | ++ IS_ZUUL=true 2026-01-17 00:17:15.923130 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.65 2026-01-17 00:17:15.923140 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.65 2026-01-17 00:17:15.923151 | orchestrator | ++ export EXTERNAL_API=false 2026-01-17 00:17:15.923162 | orchestrator | ++ EXTERNAL_API=false 2026-01-17 00:17:15.923172 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-17 00:17:15.923183 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-17 00:17:15.923193 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-17 00:17:15.923204 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-17 00:17:15.923215 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-17 00:17:15.923229 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-17 00:17:15.923248 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-01-17 00:17:15.986243 | orchestrator | + docker version 2026-01-17 00:17:16.322676 | orchestrator | Client: Docker Engine - Community 2026-01-17 00:17:16.322778 | orchestrator | Version: 27.5.1 2026-01-17 00:17:16.322793 | orchestrator | API version: 1.47 2026-01-17 00:17:16.322807 | orchestrator | Go version: go1.22.11 2026-01-17 00:17:16.322817 | orchestrator | Git commit: 9f9e405 2026-01-17 00:17:16.322829 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-17 00:17:16.322840 | orchestrator | OS/Arch: linux/amd64 2026-01-17 00:17:16.322851 | orchestrator | Context: default 2026-01-17 00:17:16.322862 | orchestrator | 2026-01-17 00:17:16.322873 | orchestrator | Server: Docker Engine - Community 2026-01-17 00:17:16.322884 | orchestrator | Engine: 2026-01-17 00:17:16.322895 | orchestrator | Version: 27.5.1 2026-01-17 00:17:16.322907 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-01-17 00:17:16.322949 | orchestrator | Go version: go1.22.11 2026-01-17 00:17:16.322960 | orchestrator | Git commit: 4c9b3b0 2026-01-17 00:17:16.322971 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-17 00:17:16.322982 | orchestrator | OS/Arch: linux/amd64 2026-01-17 00:17:16.322993 | orchestrator | Experimental: false 2026-01-17 00:17:16.323004 | orchestrator | containerd: 2026-01-17 00:17:16.323015 | orchestrator | Version: v2.2.1 2026-01-17 00:17:16.323026 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-01-17 00:17:16.323037 | orchestrator | runc: 2026-01-17 00:17:16.323048 | orchestrator | Version: 1.3.4 2026-01-17 00:17:16.323059 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-01-17 00:17:16.323070 | orchestrator | docker-init: 2026-01-17 00:17:16.323081 | orchestrator | Version: 0.19.0 2026-01-17 00:17:16.323093 | orchestrator | GitCommit: de40ad0 2026-01-17 00:17:16.327094 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-01-17 00:17:16.336342 | orchestrator | + set -e 2026-01-17 00:17:16.337362 | orchestrator | + source /opt/manager-vars.sh 2026-01-17 00:17:16.337457 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-17 00:17:16.337475 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-17 00:17:16.337487 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-17 00:17:16.337498 | orchestrator | ++ CEPH_VERSION=reef 2026-01-17 00:17:16.337509 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-17 
00:17:16.337521 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-17 00:17:16.337532 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-17 00:17:16.337543 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-17 00:17:16.337554 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-17 00:17:16.337564 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-17 00:17:16.337575 | orchestrator | ++ export ARA=false 2026-01-17 00:17:16.337586 | orchestrator | ++ ARA=false 2026-01-17 00:17:16.337597 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-17 00:17:16.337608 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-17 00:17:16.337618 | orchestrator | ++ export TEMPEST=true 2026-01-17 00:17:16.337677 | orchestrator | ++ TEMPEST=true 2026-01-17 00:17:16.337688 | orchestrator | ++ export IS_ZUUL=true 2026-01-17 00:17:16.337699 | orchestrator | ++ IS_ZUUL=true 2026-01-17 00:17:16.337710 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.65 2026-01-17 00:17:16.337720 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.65 2026-01-17 00:17:16.337731 | orchestrator | ++ export EXTERNAL_API=false 2026-01-17 00:17:16.337742 | orchestrator | ++ EXTERNAL_API=false 2026-01-17 00:17:16.337752 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-17 00:17:16.337763 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-17 00:17:16.337773 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-17 00:17:16.337784 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-17 00:17:16.337795 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-17 00:17:16.337805 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-17 00:17:16.337816 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-17 00:17:16.337827 | orchestrator | ++ export INTERACTIVE=false 2026-01-17 00:17:16.337838 | orchestrator | ++ INTERACTIVE=false 2026-01-17 00:17:16.337848 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-17 00:17:16.337864 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2026-01-17 00:17:16.337875 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-17 00:17:16.337897 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-17 00:17:16.337909 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-01-17 00:17:16.343094 | orchestrator | + set -e 2026-01-17 00:17:16.343135 | orchestrator | + VERSION=reef 2026-01-17 00:17:16.344366 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-17 00:17:16.354783 | orchestrator | + [[ -n ceph_version: reef ]] 2026-01-17 00:17:16.354827 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-01-17 00:17:16.362315 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-01-17 00:17:16.369148 | orchestrator | + set -e 2026-01-17 00:17:16.369191 | orchestrator | + VERSION=2024.2 2026-01-17 00:17:16.369783 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-17 00:17:16.373978 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-01-17 00:17:16.374054 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-01-17 00:17:16.379050 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-01-17 00:17:16.379861 | orchestrator | ++ semver latest 7.0.0 2026-01-17 00:17:16.438253 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-17 00:17:16.438341 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-17 00:17:16.438354 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-01-17 00:17:16.438950 | orchestrator | ++ semver latest 10.0.0-0 2026-01-17 00:17:16.500011 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-17 00:17:16.500386 | orchestrator | ++ semver 2024.2 2025.1 2026-01-17 00:17:16.557830 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-17 00:17:16.557923 | orchestrator | + 
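The set-ceph-version.sh and set-openstack-version.sh traces above follow the same pattern: grep the configuration file to confirm the key exists, then rewrite its value in place with `sed -i`. A minimal reproduction against a scratch file (the key and sed expression match the trace; `CFG` is a stand-in for /opt/configuration/environments/manager/configuration.yml):

```shell
# Pin a "key: value" line in a YAML-style config, as set-ceph-version.sh
# does: verify the key is present, then rewrite its value with sed -i.
# CFG is a scratch file for illustration, not the real configuration.
CFG=${CFG:-/tmp/configuration-demo.yml}
printf 'ceph_version: quincy\nopenstack_version: 2024.1\n' > "$CFG"

VERSION=reef
if [ -n "$(grep '^ceph_version:' "$CFG")" ]; then
    sed -i "s/ceph_version: .*/ceph_version: $VERSION/g" "$CFG"
fi
grep '^ceph_version:' "$CFG"   # -> ceph_version: reef
```

The grep guard means a missing key is left missing rather than silently created, which keeps the script honest about what the configuration file already declares.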
/opt/configuration/scripts/enable-resource-nodes.sh 2026-01-17 00:17:16.634222 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-17 00:17:16.636166 | orchestrator | + source /opt/venv/bin/activate 2026-01-17 00:17:16.637357 | orchestrator | ++ deactivate nondestructive 2026-01-17 00:17:16.637411 | orchestrator | ++ '[' -n '' ']' 2026-01-17 00:17:16.637423 | orchestrator | ++ '[' -n '' ']' 2026-01-17 00:17:16.637447 | orchestrator | ++ hash -r 2026-01-17 00:17:16.637458 | orchestrator | ++ '[' -n '' ']' 2026-01-17 00:17:16.637469 | orchestrator | ++ unset VIRTUAL_ENV 2026-01-17 00:17:16.637479 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-01-17 00:17:16.637506 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-01-17 00:17:16.637527 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-01-17 00:17:16.637538 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-01-17 00:17:16.637549 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-01-17 00:17:16.637560 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-01-17 00:17:16.637572 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-17 00:17:16.637583 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-17 00:17:16.637594 | orchestrator | ++ export PATH 2026-01-17 00:17:16.637605 | orchestrator | ++ '[' -n '' ']' 2026-01-17 00:17:16.637616 | orchestrator | ++ '[' -z '' ']' 2026-01-17 00:17:16.637658 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-01-17 00:17:16.637669 | orchestrator | ++ PS1='(venv) ' 2026-01-17 00:17:16.637680 | orchestrator | ++ export PS1 2026-01-17 00:17:16.637691 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-01-17 00:17:16.637703 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-01-17 00:17:16.637714 | orchestrator | ++ hash -r 2026-01-17 00:17:16.637744 | orchestrator | + ansible-playbook -i 
testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-01-17 00:17:18.119773 | orchestrator | 2026-01-17 00:17:18.119885 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-01-17 00:17:18.119902 | orchestrator | 2026-01-17 00:17:18.119914 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-01-17 00:17:18.735532 | orchestrator | ok: [testbed-manager] 2026-01-17 00:17:18.735645 | orchestrator | 2026-01-17 00:17:18.735660 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-01-17 00:17:19.741851 | orchestrator | changed: [testbed-manager] 2026-01-17 00:17:19.741950 | orchestrator | 2026-01-17 00:17:19.741967 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-01-17 00:17:19.741979 | orchestrator | 2026-01-17 00:17:19.741990 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-17 00:17:23.149681 | orchestrator | ok: [testbed-manager] 2026-01-17 00:17:23.150643 | orchestrator | 2026-01-17 00:17:23.150691 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-01-17 00:17:23.192928 | orchestrator | ok: [testbed-manager] 2026-01-17 00:17:23.193025 | orchestrator | 2026-01-17 00:17:23.193043 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-01-17 00:17:23.671555 | orchestrator | changed: [testbed-manager] 2026-01-17 00:17:23.671671 | orchestrator | 2026-01-17 00:17:23.671692 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-01-17 00:17:23.708306 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:17:23.708415 | orchestrator | 2026-01-17 00:17:23.708440 | orchestrator | TASK [Install HWE 
kernel package on Ubuntu] ************************************ 2026-01-17 00:17:24.075182 | orchestrator | changed: [testbed-manager] 2026-01-17 00:17:24.075264 | orchestrator | 2026-01-17 00:17:24.075274 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2026-01-17 00:17:24.112537 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:17:24.112633 | orchestrator | 2026-01-17 00:17:24.112644 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-01-17 00:17:24.448671 | orchestrator | ok: [testbed-manager] 2026-01-17 00:17:24.448775 | orchestrator | 2026-01-17 00:17:24.448792 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-01-17 00:17:24.571389 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:17:24.571469 | orchestrator | 2026-01-17 00:17:24.571477 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-01-17 00:17:24.571484 | orchestrator | 2026-01-17 00:17:24.571489 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-17 00:17:26.243410 | orchestrator | ok: [testbed-manager] 2026-01-17 00:17:26.243507 | orchestrator | 2026-01-17 00:17:26.243523 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-01-17 00:17:26.352895 | orchestrator | included: osism.services.traefik for testbed-manager 2026-01-17 00:17:26.352984 | orchestrator | 2026-01-17 00:17:26.352999 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-01-17 00:17:26.410580 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-01-17 00:17:26.410693 | orchestrator | 2026-01-17 00:17:26.410705 | orchestrator | TASK [osism.services.traefik : Create required 
directories] ******************** 2026-01-17 00:17:27.551135 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-01-17 00:17:27.551255 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-01-17 00:17:27.551281 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-01-17 00:17:27.551301 | orchestrator | 2026-01-17 00:17:27.551322 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-01-17 00:17:29.457783 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-01-17 00:17:29.457886 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-01-17 00:17:29.457903 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-01-17 00:17:29.457915 | orchestrator | 2026-01-17 00:17:29.457927 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-01-17 00:17:30.131628 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-17 00:17:30.131693 | orchestrator | changed: [testbed-manager] 2026-01-17 00:17:30.131700 | orchestrator | 2026-01-17 00:17:30.131705 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-01-17 00:17:30.814336 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-17 00:17:30.814445 | orchestrator | changed: [testbed-manager] 2026-01-17 00:17:30.814467 | orchestrator | 2026-01-17 00:17:30.814485 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-01-17 00:17:30.870466 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:17:30.870533 | orchestrator | 2026-01-17 00:17:30.870542 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-01-17 00:17:31.229541 | orchestrator | ok: [testbed-manager] 2026-01-17 00:17:31.229718 | orchestrator | 2026-01-17 00:17:31.229747 | 
orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-01-17 00:17:31.315803 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-01-17 00:17:31.315896 | orchestrator | 2026-01-17 00:17:31.315912 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-01-17 00:17:32.465357 | orchestrator | changed: [testbed-manager] 2026-01-17 00:17:32.465476 | orchestrator | 2026-01-17 00:17:32.465499 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-01-17 00:17:33.383666 | orchestrator | changed: [testbed-manager] 2026-01-17 00:17:33.383784 | orchestrator | 2026-01-17 00:17:33.383801 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-01-17 00:17:44.453023 | orchestrator | changed: [testbed-manager] 2026-01-17 00:17:44.453132 | orchestrator | 2026-01-17 00:17:44.453146 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-01-17 00:17:44.507450 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:17:44.507551 | orchestrator | 2026-01-17 00:17:44.507567 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-01-17 00:17:44.507582 | orchestrator | 2026-01-17 00:17:44.507711 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-17 00:17:46.336176 | orchestrator | ok: [testbed-manager] 2026-01-17 00:17:46.336270 | orchestrator | 2026-01-17 00:17:46.336287 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-01-17 00:17:46.477836 | orchestrator | included: osism.services.manager for testbed-manager 2026-01-17 00:17:46.477939 | orchestrator | 2026-01-17 00:17:46.477955 | orchestrator | TASK 
[osism.services.manager : Include install tasks] ************************** 2026-01-17 00:17:46.538710 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-01-17 00:17:46.538805 | orchestrator | 2026-01-17 00:17:46.538819 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-01-17 00:17:49.513221 | orchestrator | ok: [testbed-manager] 2026-01-17 00:17:49.513348 | orchestrator | 2026-01-17 00:17:49.513365 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-01-17 00:17:49.562661 | orchestrator | ok: [testbed-manager] 2026-01-17 00:17:49.562748 | orchestrator | 2026-01-17 00:17:49.562762 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-01-17 00:17:49.731901 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-01-17 00:17:49.732027 | orchestrator | 2026-01-17 00:17:49.732055 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-01-17 00:17:52.883055 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-01-17 00:17:52.883152 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-01-17 00:17:52.883167 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-01-17 00:17:52.883180 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-01-17 00:17:52.883191 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-01-17 00:17:52.883202 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-01-17 00:17:52.883213 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-01-17 00:17:52.883224 | orchestrator | changed: [testbed-manager] 
=> (item=/opt/state) 2026-01-17 00:17:52.883235 | orchestrator | 2026-01-17 00:17:52.883247 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-01-17 00:17:53.579713 | orchestrator | changed: [testbed-manager] 2026-01-17 00:17:53.579806 | orchestrator | 2026-01-17 00:17:53.579822 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-01-17 00:17:54.299964 | orchestrator | changed: [testbed-manager] 2026-01-17 00:17:54.300055 | orchestrator | 2026-01-17 00:17:54.300071 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-01-17 00:17:54.393987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-01-17 00:17:54.394119 | orchestrator | 2026-01-17 00:17:54.394133 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-01-17 00:17:55.755075 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-01-17 00:17:55.755175 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-01-17 00:17:55.755190 | orchestrator | 2026-01-17 00:17:55.755203 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-01-17 00:17:56.433491 | orchestrator | changed: [testbed-manager] 2026-01-17 00:17:56.433608 | orchestrator | 2026-01-17 00:17:56.433626 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-01-17 00:17:56.493199 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:17:56.493283 | orchestrator | 2026-01-17 00:17:56.493298 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-01-17 00:17:56.583134 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-01-17 00:17:56.583226 | orchestrator | 2026-01-17 00:17:56.583243 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-01-17 00:17:57.352618 | orchestrator | changed: [testbed-manager] 2026-01-17 00:17:57.352711 | orchestrator | 2026-01-17 00:17:57.352759 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-01-17 00:17:57.422157 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-01-17 00:17:57.422226 | orchestrator | 2026-01-17 00:17:57.422235 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-01-17 00:17:58.901872 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-17 00:17:58.902090 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-17 00:17:58.902115 | orchestrator | changed: [testbed-manager] 2026-01-17 00:17:58.902128 | orchestrator | 2026-01-17 00:17:58.902140 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-01-17 00:17:59.634290 | orchestrator | changed: [testbed-manager] 2026-01-17 00:17:59.634413 | orchestrator | 2026-01-17 00:17:59.634439 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-01-17 00:17:59.683942 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:17:59.684043 | orchestrator | 2026-01-17 00:17:59.684070 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-01-17 00:17:59.777918 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-01-17 00:17:59.778082 | orchestrator | 
2026-01-17 00:17:59.778125 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-01-17 00:18:01.338918 | orchestrator | changed: [testbed-manager] 2026-01-17 00:18:01.339041 | orchestrator | 2026-01-17 00:18:01.339058 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-01-17 00:18:01.827223 | orchestrator | changed: [testbed-manager] 2026-01-17 00:18:01.827296 | orchestrator | 2026-01-17 00:18:01.827310 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-01-17 00:18:03.222838 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-01-17 00:18:03.222929 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-01-17 00:18:03.222946 | orchestrator | 2026-01-17 00:18:03.222958 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-01-17 00:18:03.947616 | orchestrator | changed: [testbed-manager] 2026-01-17 00:18:03.947720 | orchestrator | 2026-01-17 00:18:03.947738 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-01-17 00:18:04.373350 | orchestrator | ok: [testbed-manager] 2026-01-17 00:18:04.373488 | orchestrator | 2026-01-17 00:18:04.373504 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-01-17 00:18:04.769802 | orchestrator | changed: [testbed-manager] 2026-01-17 00:18:04.769919 | orchestrator | 2026-01-17 00:18:04.769946 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-01-17 00:18:04.808190 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:18:04.808282 | orchestrator | 2026-01-17 00:18:04.808298 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-01-17 00:18:04.893547 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-01-17 00:18:04.893726 | orchestrator | 2026-01-17 00:18:04.893753 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-01-17 00:18:04.949647 | orchestrator | ok: [testbed-manager] 2026-01-17 00:18:04.949730 | orchestrator | 2026-01-17 00:18:04.949744 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-01-17 00:18:07.203943 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-01-17 00:18:07.204111 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-01-17 00:18:07.204142 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-01-17 00:18:07.204177 | orchestrator | 2026-01-17 00:18:07.204268 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-01-17 00:18:08.045791 | orchestrator | changed: [testbed-manager] 2026-01-17 00:18:08.045868 | orchestrator | 2026-01-17 00:18:08.045877 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-01-17 00:18:08.851089 | orchestrator | changed: [testbed-manager] 2026-01-17 00:18:08.851185 | orchestrator | 2026-01-17 00:18:08.851204 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-01-17 00:18:09.593820 | orchestrator | changed: [testbed-manager] 2026-01-17 00:18:09.593924 | orchestrator | 2026-01-17 00:18:09.593945 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-01-17 00:18:09.678236 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-01-17 00:18:09.678316 | orchestrator | 2026-01-17 00:18:09.678334 | orchestrator | TASK 
[osism.services.manager : Include scripts vars file] ********************** 2026-01-17 00:18:09.726981 | orchestrator | ok: [testbed-manager] 2026-01-17 00:18:09.727056 | orchestrator | 2026-01-17 00:18:09.727069 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-01-17 00:18:10.562002 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-01-17 00:18:10.562145 | orchestrator | 2026-01-17 00:18:10.562163 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-01-17 00:18:10.664607 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-01-17 00:18:10.664696 | orchestrator | 2026-01-17 00:18:10.664711 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-01-17 00:18:11.477609 | orchestrator | changed: [testbed-manager] 2026-01-17 00:18:11.477699 | orchestrator | 2026-01-17 00:18:11.477716 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-01-17 00:18:12.128155 | orchestrator | ok: [testbed-manager] 2026-01-17 00:18:12.128254 | orchestrator | 2026-01-17 00:18:12.128271 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-01-17 00:18:12.191946 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:18:12.192082 | orchestrator | 2026-01-17 00:18:12.192096 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-01-17 00:18:12.256058 | orchestrator | ok: [testbed-manager] 2026-01-17 00:18:12.256197 | orchestrator | 2026-01-17 00:18:12.256214 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-01-17 00:18:13.164647 | orchestrator | changed: [testbed-manager] 2026-01-17 00:18:13.164738 | orchestrator | 2026-01-17 
00:18:13.164754 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-01-17 00:19:20.434413 | orchestrator | changed: [testbed-manager] 2026-01-17 00:19:20.434509 | orchestrator | 2026-01-17 00:19:20.434521 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-01-17 00:19:21.473871 | orchestrator | ok: [testbed-manager] 2026-01-17 00:19:21.473950 | orchestrator | 2026-01-17 00:19:21.473962 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-01-17 00:19:21.536779 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:19:21.536841 | orchestrator | 2026-01-17 00:19:21.536850 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-01-17 00:19:23.888357 | orchestrator | changed: [testbed-manager] 2026-01-17 00:19:23.888437 | orchestrator | 2026-01-17 00:19:23.888485 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-01-17 00:19:23.956551 | orchestrator | ok: [testbed-manager] 2026-01-17 00:19:23.956645 | orchestrator | 2026-01-17 00:19:23.956661 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-17 00:19:23.956673 | orchestrator | 2026-01-17 00:19:23.956685 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-01-17 00:19:23.998938 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:19:23.999042 | orchestrator | 2026-01-17 00:19:23.999057 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-01-17 00:20:24.039307 | orchestrator | Pausing for 60 seconds 2026-01-17 00:20:24.039478 | orchestrator | changed: [testbed-manager] 2026-01-17 00:20:24.039505 | orchestrator | 2026-01-17 00:20:24.039524 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure 
that all containers are up] *** 2026-01-17 00:20:27.615284 | orchestrator | changed: [testbed-manager] 2026-01-17 00:20:27.615419 | orchestrator | 2026-01-17 00:20:27.615438 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-01-17 00:21:29.728460 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-01-17 00:21:29.728632 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-01-17 00:21:29.728660 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-01-17 00:21:29.728679 | orchestrator | changed: [testbed-manager] 2026-01-17 00:21:29.728698 | orchestrator | 2026-01-17 00:21:29.728717 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-01-17 00:21:40.685747 | orchestrator | changed: [testbed-manager] 2026-01-17 00:21:40.685917 | orchestrator | 2026-01-17 00:21:40.685933 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-01-17 00:21:40.766953 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-01-17 00:21:40.767044 | orchestrator | 2026-01-17 00:21:40.767061 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-17 00:21:40.767073 | orchestrator | 2026-01-17 00:21:40.767085 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-01-17 00:21:40.831008 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:21:40.831122 | orchestrator | 2026-01-17 00:21:40.831137 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-01-17 00:21:40.903493 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-01-17 00:21:40.903565 | orchestrator | 2026-01-17 00:21:40.903575 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-01-17 00:21:41.694948 | orchestrator | changed: [testbed-manager] 2026-01-17 00:21:41.695015 | orchestrator | 2026-01-17 00:21:41.695021 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-01-17 00:21:44.887560 | orchestrator | ok: [testbed-manager] 2026-01-17 00:21:44.887665 | orchestrator | 2026-01-17 00:21:44.887681 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-01-17 00:21:44.972745 | orchestrator | ok: [testbed-manager] => { 2026-01-17 00:21:44.972847 | orchestrator | "version_check_result.stdout_lines": [ 2026-01-17 00:21:44.972866 | orchestrator | "=== OSISM Container Version Check ===", 2026-01-17 00:21:44.972879 | orchestrator | "Checking running containers against expected versions...", 2026-01-17 00:21:44.972891 | orchestrator | "", 2026-01-17 00:21:44.972903 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-01-17 00:21:44.972914 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-01-17 00:21:44.972926 | orchestrator | " Enabled: true", 2026-01-17 00:21:44.972937 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-01-17 00:21:44.972948 | orchestrator | " Status: ✅ MATCH", 2026-01-17 00:21:44.972959 | orchestrator | "", 2026-01-17 00:21:44.972970 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-01-17 00:21:44.972981 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-01-17 00:21:44.972992 | orchestrator | " Enabled: true", 2026-01-17 00:21:44.973003 | orchestrator | " Running: 
registry.osism.tech/osism/osism-ansible:latest", 2026-01-17 00:21:44.973014 | orchestrator | " Status: ✅ MATCH", 2026-01-17 00:21:44.973024 | orchestrator | "", 2026-01-17 00:21:44.973036 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-01-17 00:21:44.973046 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-01-17 00:21:44.973057 | orchestrator | " Enabled: true", 2026-01-17 00:21:44.973068 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-01-17 00:21:44.973080 | orchestrator | " Status: ✅ MATCH", 2026-01-17 00:21:44.973091 | orchestrator | "", 2026-01-17 00:21:44.973102 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-01-17 00:21:44.973113 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-01-17 00:21:44.973124 | orchestrator | " Enabled: true", 2026-01-17 00:21:44.973135 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-01-17 00:21:44.973170 | orchestrator | " Status: ✅ MATCH", 2026-01-17 00:21:44.973182 | orchestrator | "", 2026-01-17 00:21:44.973193 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-01-17 00:21:44.973204 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-01-17 00:21:44.973214 | orchestrator | " Enabled: true", 2026-01-17 00:21:44.973225 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-01-17 00:21:44.973235 | orchestrator | " Status: ✅ MATCH", 2026-01-17 00:21:44.973246 | orchestrator | "", 2026-01-17 00:21:44.973257 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-01-17 00:21:44.973268 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-17 00:21:44.973307 | orchestrator | " Enabled: true", 2026-01-17 00:21:44.973320 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-17 00:21:44.973332 | 
orchestrator | " Status: ✅ MATCH", 2026-01-17 00:21:44.973344 | orchestrator | "", 2026-01-17 00:21:44.973357 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-01-17 00:21:44.973370 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-01-17 00:21:44.973383 | orchestrator | " Enabled: true", 2026-01-17 00:21:44.973395 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-01-17 00:21:44.973408 | orchestrator | " Status: ✅ MATCH", 2026-01-17 00:21:44.973420 | orchestrator | "", 2026-01-17 00:21:44.973432 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-01-17 00:21:44.973445 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-17 00:21:44.973457 | orchestrator | " Enabled: true", 2026-01-17 00:21:44.973490 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-17 00:21:44.973508 | orchestrator | " Status: ✅ MATCH", 2026-01-17 00:21:44.973520 | orchestrator | "", 2026-01-17 00:21:44.973533 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-01-17 00:21:44.973546 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-01-17 00:21:44.973558 | orchestrator | " Enabled: true", 2026-01-17 00:21:44.973571 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-01-17 00:21:44.973583 | orchestrator | " Status: ✅ MATCH", 2026-01-17 00:21:44.973595 | orchestrator | "", 2026-01-17 00:21:44.973607 | orchestrator | "Checking service: redis (Redis Cache)", 2026-01-17 00:21:44.973627 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-17 00:21:44.973647 | orchestrator | " Enabled: true", 2026-01-17 00:21:44.973667 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-17 00:21:44.973688 | orchestrator | " Status: ✅ MATCH", 2026-01-17 00:21:44.973702 | orchestrator | "", 
2026-01-17 00:21:44.973712 | orchestrator | "Checking service: api (OSISM API Service)", 2026-01-17 00:21:44.973723 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-17 00:21:44.973734 | orchestrator | " Enabled: true", 2026-01-17 00:21:44.973745 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-17 00:21:44.973756 | orchestrator | " Status: ✅ MATCH", 2026-01-17 00:21:44.973766 | orchestrator | "", 2026-01-17 00:21:44.973777 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-01-17 00:21:44.973788 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-17 00:21:44.973798 | orchestrator | " Enabled: true", 2026-01-17 00:21:44.973809 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-17 00:21:44.973820 | orchestrator | " Status: ✅ MATCH", 2026-01-17 00:21:44.973830 | orchestrator | "", 2026-01-17 00:21:44.973841 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-01-17 00:21:44.973852 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-17 00:21:44.973863 | orchestrator | " Enabled: true", 2026-01-17 00:21:44.973873 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-17 00:21:44.973884 | orchestrator | " Status: ✅ MATCH", 2026-01-17 00:21:44.973895 | orchestrator | "", 2026-01-17 00:21:44.973905 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-01-17 00:21:44.973933 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-17 00:21:44.973944 | orchestrator | " Enabled: true", 2026-01-17 00:21:44.973955 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-17 00:21:44.973966 | orchestrator | " Status: ✅ MATCH", 2026-01-17 00:21:44.973976 | orchestrator | "", 2026-01-17 00:21:44.973987 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-01-17 00:21:44.974055 | orchestrator | " 
Expected: registry.osism.tech/osism/osism:latest", 2026-01-17 00:21:44.974070 | orchestrator | " Enabled: true", 2026-01-17 00:21:44.974081 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-17 00:21:44.974091 | orchestrator | " Status: ✅ MATCH", 2026-01-17 00:21:44.974102 | orchestrator | "", 2026-01-17 00:21:44.974113 | orchestrator | "=== Summary ===", 2026-01-17 00:21:44.974124 | orchestrator | "Errors (version mismatches): 0", 2026-01-17 00:21:44.974134 | orchestrator | "Warnings (expected containers not running): 0", 2026-01-17 00:21:44.974145 | orchestrator | "", 2026-01-17 00:21:44.974155 | orchestrator | "✅ All running containers match expected versions!" 2026-01-17 00:21:44.974166 | orchestrator | ] 2026-01-17 00:21:44.974177 | orchestrator | } 2026-01-17 00:21:44.974189 | orchestrator | 2026-01-17 00:21:44.974199 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-01-17 00:21:45.026595 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:21:45.026688 | orchestrator | 2026-01-17 00:21:45.026704 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:21:45.026716 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2026-01-17 00:21:45.026728 | orchestrator | 2026-01-17 00:21:45.135541 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-17 00:21:45.135641 | orchestrator | + deactivate 2026-01-17 00:21:45.135661 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-01-17 00:21:45.135677 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-17 00:21:45.135689 | orchestrator | + export PATH 2026-01-17 00:21:45.135699 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-01-17 00:21:45.135713 | orchestrator | + '[' 
-n '' ']' 2026-01-17 00:21:45.135724 | orchestrator | + hash -r 2026-01-17 00:21:45.135734 | orchestrator | + '[' -n '' ']' 2026-01-17 00:21:45.135746 | orchestrator | + unset VIRTUAL_ENV 2026-01-17 00:21:45.135759 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-01-17 00:21:45.135771 | orchestrator | + '[' '!' '' = nondestructive ']' 2026-01-17 00:21:45.135783 | orchestrator | + unset -f deactivate 2026-01-17 00:21:45.135796 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-01-17 00:21:45.146003 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-17 00:21:45.146182 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-17 00:21:45.146201 | orchestrator | + local max_attempts=60 2026-01-17 00:21:45.146213 | orchestrator | + local name=ceph-ansible 2026-01-17 00:21:45.146225 | orchestrator | + local attempt_num=1 2026-01-17 00:21:45.146530 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-17 00:21:45.189023 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-17 00:21:45.189110 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-17 00:21:45.189124 | orchestrator | + local max_attempts=60 2026-01-17 00:21:45.189135 | orchestrator | + local name=kolla-ansible 2026-01-17 00:21:45.189145 | orchestrator | + local attempt_num=1 2026-01-17 00:21:45.189401 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-17 00:21:45.234556 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-17 00:21:45.234638 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-17 00:21:45.234652 | orchestrator | + local max_attempts=60 2026-01-17 00:21:45.234664 | orchestrator | + local name=osism-ansible 2026-01-17 00:21:45.234674 | orchestrator | + local attempt_num=1 2026-01-17 00:21:45.235778 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 
2026-01-17 00:21:45.272109 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-17 00:21:45.272196 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-17 00:21:45.272210 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-17 00:21:46.020915 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-01-17 00:21:46.183996 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-01-17 00:21:46.490064 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-01-17 00:21:46.490150 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-01-17 00:21:46.490165 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-01-17 00:21:46.490181 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-01-17 00:21:46.490193 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-01-17 00:21:46.490204 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-01-17 00:21:46.490215 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-01-17 00:21:46.490248 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-01-17 00:21:46.490260 | orchestrator | manager-mariadb-1 
registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-01-17 00:21:46.490271 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-01-17 00:21:46.490318 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-01-17 00:21:46.490330 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-01-17 00:21:46.490368 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-01-17 00:21:46.490380 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-01-17 00:21:46.490391 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-01-17 00:21:46.490403 | orchestrator | ++ semver latest 7.0.0 2026-01-17 00:21:46.490416 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-17 00:21:46.490427 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-17 00:21:46.490439 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-01-17 00:21:46.490451 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-01-17 00:21:58.584184 | orchestrator | 2026-01-17 00:21:58 | INFO  | Task 52389f22-397d-4377-8c57-55e64f52273f (resolvconf) was prepared for execution. 
2026-01-17 00:21:58.584357 | orchestrator | 2026-01-17 00:21:58 | INFO  | It takes a moment until task 52389f22-397d-4377-8c57-55e64f52273f (resolvconf) has been started and output is visible here. 2026-01-17 00:22:13.072893 | orchestrator | 2026-01-17 00:22:13.073009 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-01-17 00:22:13.073026 | orchestrator | 2026-01-17 00:22:13.073038 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-17 00:22:13.073049 | orchestrator | Saturday 17 January 2026 00:22:02 +0000 (0:00:00.143) 0:00:00.143 ****** 2026-01-17 00:22:13.073060 | orchestrator | ok: [testbed-manager] 2026-01-17 00:22:13.073072 | orchestrator | 2026-01-17 00:22:13.073083 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-17 00:22:13.073095 | orchestrator | Saturday 17 January 2026 00:22:06 +0000 (0:00:03.886) 0:00:04.030 ****** 2026-01-17 00:22:13.073106 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:22:13.073117 | orchestrator | 2026-01-17 00:22:13.073128 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-17 00:22:13.073138 | orchestrator | Saturday 17 January 2026 00:22:06 +0000 (0:00:00.056) 0:00:04.086 ****** 2026-01-17 00:22:13.073149 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-01-17 00:22:13.073161 | orchestrator | 2026-01-17 00:22:13.073172 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-17 00:22:13.073183 | orchestrator | Saturday 17 January 2026 00:22:06 +0000 (0:00:00.085) 0:00:04.172 ****** 2026-01-17 00:22:13.073194 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-01-17 00:22:13.073204 | orchestrator | 2026-01-17 00:22:13.073215 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-01-17 00:22:13.073236 | orchestrator | Saturday 17 January 2026 00:22:06 +0000 (0:00:00.084) 0:00:04.256 ****** 2026-01-17 00:22:13.073290 | orchestrator | ok: [testbed-manager] 2026-01-17 00:22:13.073302 | orchestrator | 2026-01-17 00:22:13.073313 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-17 00:22:13.073324 | orchestrator | Saturday 17 January 2026 00:22:08 +0000 (0:00:01.289) 0:00:05.546 ****** 2026-01-17 00:22:13.073335 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:22:13.073345 | orchestrator | 2026-01-17 00:22:13.073356 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-17 00:22:13.073367 | orchestrator | Saturday 17 January 2026 00:22:08 +0000 (0:00:00.064) 0:00:05.611 ****** 2026-01-17 00:22:13.073377 | orchestrator | ok: [testbed-manager] 2026-01-17 00:22:13.073388 | orchestrator | 2026-01-17 00:22:13.073399 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-17 00:22:13.073409 | orchestrator | Saturday 17 January 2026 00:22:08 +0000 (0:00:00.521) 0:00:06.133 ****** 2026-01-17 00:22:13.073420 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:22:13.073431 | orchestrator | 2026-01-17 00:22:13.073442 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-17 00:22:13.073454 | orchestrator | Saturday 17 January 2026 00:22:08 +0000 (0:00:00.084) 0:00:06.218 ****** 2026-01-17 00:22:13.073465 | orchestrator | changed: [testbed-manager] 2026-01-17 00:22:13.073475 | orchestrator | 2026-01-17 
00:22:13.073486 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-17 00:22:13.073497 | orchestrator | Saturday 17 January 2026 00:22:09 +0000 (0:00:00.556) 0:00:06.774 ****** 2026-01-17 00:22:13.073507 | orchestrator | changed: [testbed-manager] 2026-01-17 00:22:13.073518 | orchestrator | 2026-01-17 00:22:13.073529 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-17 00:22:13.073539 | orchestrator | Saturday 17 January 2026 00:22:10 +0000 (0:00:01.164) 0:00:07.939 ****** 2026-01-17 00:22:13.073575 | orchestrator | ok: [testbed-manager] 2026-01-17 00:22:13.073586 | orchestrator | 2026-01-17 00:22:13.073597 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-17 00:22:13.073608 | orchestrator | Saturday 17 January 2026 00:22:11 +0000 (0:00:00.999) 0:00:08.938 ****** 2026-01-17 00:22:13.073619 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-01-17 00:22:13.073630 | orchestrator | 2026-01-17 00:22:13.073641 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-17 00:22:13.073651 | orchestrator | Saturday 17 January 2026 00:22:11 +0000 (0:00:00.085) 0:00:09.024 ****** 2026-01-17 00:22:13.073663 | orchestrator | changed: [testbed-manager] 2026-01-17 00:22:13.073674 | orchestrator | 2026-01-17 00:22:13.073692 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:22:13.073712 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-17 00:22:13.073731 | orchestrator | 2026-01-17 00:22:13.073751 | orchestrator | 2026-01-17 00:22:13.073771 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-17 00:22:13.073786 | orchestrator | Saturday 17 January 2026 00:22:12 +0000 (0:00:01.177) 0:00:10.201 ****** 2026-01-17 00:22:13.073796 | orchestrator | =============================================================================== 2026-01-17 00:22:13.073807 | orchestrator | Gathering Facts --------------------------------------------------------- 3.89s 2026-01-17 00:22:13.073817 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.29s 2026-01-17 00:22:13.073828 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.18s 2026-01-17 00:22:13.073838 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.16s 2026-01-17 00:22:13.073849 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.00s 2026-01-17 00:22:13.073859 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.56s 2026-01-17 00:22:13.073889 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.52s 2026-01-17 00:22:13.073900 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2026-01-17 00:22:13.073911 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-01-17 00:22:13.073922 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-01-17 00:22:13.073932 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2026-01-17 00:22:13.073943 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-01-17 00:22:13.073953 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-01-17 00:22:13.397444 | 
orchestrator | + osism apply sshconfig 2026-01-17 00:22:25.631480 | orchestrator | 2026-01-17 00:22:25 | INFO  | Task 16c4714d-bc0e-4037-bacf-8e166eabeede (sshconfig) was prepared for execution. 2026-01-17 00:22:25.631608 | orchestrator | 2026-01-17 00:22:25 | INFO  | It takes a moment until task 16c4714d-bc0e-4037-bacf-8e166eabeede (sshconfig) has been started and output is visible here. 2026-01-17 00:22:37.776831 | orchestrator | 2026-01-17 00:22:37.776953 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-01-17 00:22:37.776969 | orchestrator | 2026-01-17 00:22:37.776980 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-01-17 00:22:37.776991 | orchestrator | Saturday 17 January 2026 00:22:29 +0000 (0:00:00.165) 0:00:00.165 ****** 2026-01-17 00:22:37.777001 | orchestrator | ok: [testbed-manager] 2026-01-17 00:22:37.777012 | orchestrator | 2026-01-17 00:22:37.777022 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-01-17 00:22:37.777031 | orchestrator | Saturday 17 January 2026 00:22:30 +0000 (0:00:00.546) 0:00:00.711 ****** 2026-01-17 00:22:37.777080 | orchestrator | changed: [testbed-manager] 2026-01-17 00:22:37.777091 | orchestrator | 2026-01-17 00:22:37.777101 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-01-17 00:22:37.777111 | orchestrator | Saturday 17 January 2026 00:22:31 +0000 (0:00:00.517) 0:00:01.229 ****** 2026-01-17 00:22:37.777120 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-01-17 00:22:37.777131 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-01-17 00:22:37.777140 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-01-17 00:22:37.777150 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-01-17 00:22:37.777160 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-3) 2026-01-17 00:22:37.777169 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-01-17 00:22:37.777178 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2026-01-17 00:22:37.777188 | orchestrator | 2026-01-17 00:22:37.777197 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-01-17 00:22:37.777207 | orchestrator | Saturday 17 January 2026 00:22:36 +0000 (0:00:05.821) 0:00:07.050 ****** 2026-01-17 00:22:37.777259 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:22:37.777270 | orchestrator | 2026-01-17 00:22:37.777280 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-01-17 00:22:37.777290 | orchestrator | Saturday 17 January 2026 00:22:36 +0000 (0:00:00.092) 0:00:07.143 ****** 2026-01-17 00:22:37.777299 | orchestrator | changed: [testbed-manager] 2026-01-17 00:22:37.777309 | orchestrator | 2026-01-17 00:22:37.777318 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:22:37.777329 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-17 00:22:37.777339 | orchestrator | 2026-01-17 00:22:37.777349 | orchestrator | 2026-01-17 00:22:37.777358 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 00:22:37.777368 | orchestrator | Saturday 17 January 2026 00:22:37 +0000 (0:00:00.592) 0:00:07.735 ****** 2026-01-17 00:22:37.777377 | orchestrator | =============================================================================== 2026-01-17 00:22:37.777387 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.82s 2026-01-17 00:22:37.777396 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.59s 2026-01-17 00:22:37.777406 | orchestrator | 
osism.commons.sshconfig : Get home directory of operator user ----------- 0.55s 2026-01-17 00:22:37.777415 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.52s 2026-01-17 00:22:37.777425 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s 2026-01-17 00:22:38.100681 | orchestrator | + osism apply known-hosts 2026-01-17 00:22:50.125431 | orchestrator | 2026-01-17 00:22:50 | INFO  | Task ffb715a8-999d-4734-9517-bfbec4d20d3a (known-hosts) was prepared for execution. 2026-01-17 00:22:50.125527 | orchestrator | 2026-01-17 00:22:50 | INFO  | It takes a moment until task ffb715a8-999d-4734-9517-bfbec4d20d3a (known-hosts) has been started and output is visible here. 2026-01-17 00:23:07.253269 | orchestrator | 2026-01-17 00:23:07.253391 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-01-17 00:23:07.253401 | orchestrator | 2026-01-17 00:23:07.253406 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-01-17 00:23:07.253412 | orchestrator | Saturday 17 January 2026 00:22:54 +0000 (0:00:00.201) 0:00:00.201 ****** 2026-01-17 00:23:07.253417 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-17 00:23:07.253422 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-17 00:23:07.253426 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-17 00:23:07.253430 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-17 00:23:07.253445 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-17 00:23:07.253450 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-17 00:23:07.253453 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-17 00:23:07.253457 | orchestrator | 2026-01-17 00:23:07.253461 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts 
entries for all hosts with hostname] *** 2026-01-17 00:23:07.253466 | orchestrator | Saturday 17 January 2026 00:23:00 +0000 (0:00:06.052) 0:00:06.254 ****** 2026-01-17 00:23:07.253472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-17 00:23:07.253477 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-17 00:23:07.253488 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-17 00:23:07.253491 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-17 00:23:07.253495 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-17 00:23:07.253499 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-17 00:23:07.253503 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-17 00:23:07.253506 | orchestrator | 2026-01-17 00:23:07.253510 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-17 00:23:07.253514 | orchestrator | Saturday 17 January 2026 00:23:00 +0000 
(0:00:00.180) 0:00:06.434 ****** 2026-01-17 00:23:07.253518 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHSCC7UJ4xhhMEwMA1FGC3JpyjMm+WnRJmql/AY5tlgXpo5VD8CrKdn8UYLtZ40QhcoV7sX45WF0CiiMzC0Whtk=) 2026-01-17 00:23:07.253526 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKbvzONkiiGUtKsSUVs4rqmbXIywahtDMmOXxwss4u3WnoP3TVYdoN01JE+BbW2EDDq8uJ9GabZ7oHAlP0U6q2OxB6CsBJC8giSMIJvkE2xLw3GSn9s48T+g6amUIwxBIMjF1nxIinOENLrKDyh3MDvGqcJ8VL10bzcx8g3lcGFwQ+Nlmsc0VRsku9Kipf35oyb07hUfgOPnI58DhgWgBnrjaxsakQCA8fJfQSuV72QyHG5++5V+AAss1PgYbQKQ27UD96RPSQmF323EpQY/pQFTnLLI5T6l2Hkg17e9f7Ys1sicJtIj9QyZ2nPwaU1EkdTo77Q9RnOPTRvnCMF/SZdE4WfVWHvxrdUz5SQ6Zlwe7US6rfa2xXAjFFTcbAPIuFOQqe5QpqaqgtzL9+EFBUwVQXsSir9okzReNyQuu5vX+4JpgQqW/AuJjGiq+0wA7cLyUOfYvoEZfvoGptHDg/+l4LGHWhOR+Mbsy1f5r7RGrfBjKA+XK5aQIi4vGuBKk=) 2026-01-17 00:23:07.253534 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPk+T6TsMAp3nYvbpDXhnhNSSLGuR5KCQZRpTcA/+BoC) 2026-01-17 00:23:07.253540 | orchestrator | 2026-01-17 00:23:07.253544 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-17 00:23:07.253547 | orchestrator | Saturday 17 January 2026 00:23:01 +0000 (0:00:01.179) 0:00:07.613 ****** 2026-01-17 00:23:07.253551 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI4B0ngBwmHGyBuMbqVsgJnLSwDMQKC8PXhZQsDhP7jTIBk4R5iXW/lXUNRKIHrFle2aHg98IA+jid9V8XidwN0=) 2026-01-17 00:23:07.253571 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDIcIduz4/LGoe+oxezUscw3Px4K/nclaea1g0OFjHYiv1sq5zbRnPNXNS4C77hcmMjyKXc2ld86ywX3kFHF6Q/keEJJNsin9VRlUAsbMmxaLEymq6qmWMvC/XTBvYTgHVKi+wPjC1l9kLe6Imu97/VvcIJaaXIWfcEMo6cXxL2XmNjedrTmWpcncE5Lv57Z4QIILvTdyZv+8GCPVG2U13D6aOKbuFYHNOsZENLkK58rpqqGdsY4sa5ufviP0Bt8xXaXC9s9XoOGl2/D8a2JUCX2XMUgTxMcLA0M7jye1/LKyj7Fsjd7MXzRqO3Zwp4m9DQ5NkKPOorU/pvUn3e3JwP5AkxNU3yxnfSezhMYzMAg5vmv+mKVYR58QuS4ub8hhCEfjOMr8wf/35XOdpCLPWs+7wWyWfifTTn+Q0l3jdQ8iTnOs2xLLOMPV+INddRBEEe3PMKnE8NhOh8EOIaPge+SFDfTbvZrMYJEgFpN1Nrl9DaBeba1wn1es2qWfReT8E=) 2026-01-17 00:23:07.253579 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEnG9PdznQN9c6RohH9blDjz7QbNyHvLOQfX3NiCOm15) 2026-01-17 00:23:07.253583 | orchestrator | 2026-01-17 00:23:07.253587 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-17 00:23:07.253591 | orchestrator | Saturday 17 January 2026 00:23:02 +0000 (0:00:01.058) 0:00:08.672 ****** 2026-01-17 00:23:07.253594 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtPxvo4ipyJcLs73nsByH5S855a31BWsi7DlX0hiEE65jyTaDkTZI0moZLEwz5wphxw4a/1DpgRSYYmZwDcDXWyjWjjM1HWkjC/kbVZbCpniVCW6ueM+T6aEbNm2gA4TZbtY9d4mfORGYQ5vQSiVXfoP0AwdXi+CTjGpD+Tqswx5iFfPqk7P/7QcBlNxg5wNabjBnNFiOij0gQBeAfQ5cZrozn/l1Wyahl2jDL17/hb8p1OCJnavLVXzA1BYLtnIUC7tDqywJqaSgttKPStONTzd9wwJ3HOdWmcuLZWUa4vsTKzUPs2n5O8GmDTKAIS2AeiR6NnFje8gaBBeUDNhJ/cZt8exl/gXsn3OBFbJFSZaWsqdeQEwYVbw97tWRy1xEO0ufkKfl3SmiMJ+qOnJnvDD9ugDJ5dnYw+Sqp+FrU57he7zdNe72H/PBB0C8+NgDTuYoulBuTC8TkA73Nm42cI5/o7mZjvI+egi88nOQMu2035s6hSBfjC5Zf4t0xMVs=) 2026-01-17 00:23:07.253599 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDGzt+GyD+Jr3bmV25yncEOFqy5rO3zV3EphWjhp7biEeRCTUy0gg2iAAAtta9Isa6xQ2x+kNJDazvXhTg2V4pU=) 2026-01-17 00:23:07.253602 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINWDWaku6jZj+GP5VuhwGAJIvDrRC3tQpdRK2/5AA8Zs) 2026-01-17 00:23:07.253606 | orchestrator | 2026-01-17 00:23:07.253610 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-17 00:23:07.253614 | orchestrator | Saturday 17 January 2026 00:23:03 +0000 (0:00:01.092) 0:00:09.765 ****** 2026-01-17 00:23:07.253617 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAWEcEzwCjzLjyWZgr+nftUyKHqMaIgYdplg8IyrGl3B) 2026-01-17 00:23:07.253653 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFHobcmM4FCOnt8CvZLgfd/ax0zR43IsUg1HXwfSbb69B3Q7Wja8rd6ROtz+bmVxV9gHsWfwjydLX7QreQjZn5lJCs1T4kS30xaWVQUtlRXFOK20Lo4pjAUyKq2X/OnEBF8yd1R+PvM/1EXI/Rupa0RbjmtU7WgvnW18GGxcRLb5CD/BXol561hjXAWBaUl4vg/3dHtt/vX7CiPQzCw1IsyxV/nDtU5HtaRqN5XkcclkNiEGG9Eo7PtPyqntIA7PKnErJmmmGUl+tvnTGpdQHFAGeiY91QrWXIycDCwH9K6HlX/CS8eAxBHSx6gY7THyHLs/JhuFP8zXH42o6o/yheMlvBwxPRh3ctr6KyVoKKgOqzvPTVIGIdyqxDN0/Sr/dDF0KUA9hgt9pOwvxT6knl6G1rDSMhlSYsECutj2ZMQO6ur8a6tU6xeMnq8qPQHbC2ENNyeg2lllL3Nzt/D2xhtdecwuc8NlY1F4GHGcnBFxSsIBRnUTadnNEQhXNmY4M=) 2026-01-17 00:23:07.253658 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEKUdrgpcPTqC3BimTcvNd8evyBrUmetgkqAZRjPrdUlIY0NQfuk5vr4Md7VFdjrGEu3mCkRVnpv7OjporntV4k=) 2026-01-17 00:23:07.253662 | orchestrator | 2026-01-17 00:23:07.253665 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-17 00:23:07.253669 | orchestrator | Saturday 17 January 2026 00:23:05 +0000 (0:00:01.112) 0:00:10.877 ****** 2026-01-17 00:23:07.253673 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCvxXdD4YH76KHbOgSUglOZ2+jJeBGhOtNSysoIEjQN1kE3wms9Ly1Ie0dT+p+YuCO9bMAbrVqbcjP/SBvpAkFCKVdjsCIiQbOjU72fUlSoCPhz3Pwqpq+3oX79bbJfUpYIwINK2/taCR5iVv6iMfQiSbnrXxZrgiAfrwhfNtmLhvHw9SEX611Cr8WmNb/pDalT9bZGRukUayZoqZ3gV5IEdzY9+PyQ0cd01i5EHMIZUV6QofJvNBc2sKredlYJystbD6Si7FID2NVjTdNc3iTZz0jDHPDbS/blQKgOTpBbqjr2498Up4BLp7hz+R+Q7eGVRw378Ft2Kubgr3RfYWD2h4NXAnR02YQ77Z/zBkZrdN7l5Cs8+uxuL48VRoFKCY9/MBBJpxnQHrmxe51vAOiNwj3gjcNDjAMmAu1Co32QcCxvH1F9XwBCgaUVekJFSWOfZhhg0Xof6dYdUxTuHOsivhRB3I2yoDUgf27hsAt9XSybS1N0C+JPfIPd34uAvBU=) 2026-01-17 00:23:07.253680 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMYLZz00M55GuOk0xy00G/GPVi0nC4sE0a5DNxKvxCIXWAgth648MXTJmbm7N/H21qp5diJE/4AEUR+VUgdWtO4=) 2026-01-17 00:23:07.253684 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKU6esRQ/z+EOHyVxgQApSimmMHqHLMXfjljGV2W0r/2) 2026-01-17 00:23:07.253688 | orchestrator | 2026-01-17 00:23:07.253692 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-17 00:23:07.253695 | orchestrator | Saturday 17 January 2026 00:23:06 +0000 (0:00:01.096) 0:00:11.974 ****** 2026-01-17 00:23:07.253702 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEZ325QaTWVPei+zJVTZ/jGjdRZN6RBAYsOoZKSutiRK2YRu30REbLApDNsj4N1ZIpQa2p7Q4MONWNJ9Wuwx6pQ=) 2026-01-17 00:23:18.275863 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCyPK7L3ZgVZSH1q9YwtpfXp1PUdc5moUbtCG/iIr77WoKwIvlJ6DKfvmKr3XtRZ63vmIRKVKT0t7XwHDBNVaTSF+164zI84aj5e9AmWrP8qVipNj/9UwGdGPVom/IYgKzX/wtPVVEgXZ+k3UzWHfDkLbRk/CkSU3cYRexMmKCJiPHco7OxqzEK/NBtRnERzjeiEmGblKUjVNO/zo2MmlLL6uhoY1mpwSCX3gZwy7XDeO3gFO3byV8rw74kEf2gqWwYfy5xZYNoebbYGRLZZMa6bXV1aPev13sS4JAkQ+kJ5MKlijK8BRjPf40imUpCPx+V8gdtppR+C6G2jXWK9mcOoA7iurSTjROf2x3Uo5pTPkK8n9W7oDjjReDtTWoGA6LZOTdfOHS5mHPlVlKT3+Dm2AC18VnVIS4pKm+5WIqavw3pcy54fQlGVPjFQKD1qUwj5qxhL6eBnGHzjpU4FXBjRPKcXZjprW6Ih7ixP28++o+404nAlCvM4cjKeydokH0=) 2026-01-17 00:23:18.275980 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPwWgTclbp/i3sMEeOXGB0gdGVlMioM9fKfG1Q3en/X0) 2026-01-17 00:23:18.275998 | orchestrator | 2026-01-17 00:23:18.276011 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-17 00:23:18.276024 | orchestrator | Saturday 17 January 2026 00:23:07 +0000 (0:00:01.059) 0:00:13.034 ****** 2026-01-17 00:23:18.276035 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPEbWhoTMc3tFT7Ds6dU0B0kxsMiNR/v4fsF2mx+ZM3wXH21vC/lmDF3gd42+oNF4tPeGo+bcrxhImT0HrZxxmE=) 2026-01-17 00:23:18.276048 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILDHStI/D2b93659uZ1N9jMojhSRfSmN0xvoN74OcrPV) 2026-01-17 00:23:18.276060 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDPDa+J6DOj0Y7O449WH3go7R3ukGXAR49TmUkUWxtBx+ZEWzwsoM0EfX8dCYiQXt5mI/4WaIL3KWbPqJGI/N4iXWqOblKxScAV5LY2oOkOBL2UPzvKrTHaqXhMKs+BMW+Wiz1tUwowzPG4DkvhBlmY/+D0wndofcpv9BLPhgqMgQ4Akbw1t1iCIyLvGqS8c8R364aN6FblAK8q3ITCkWG0oF+pFykc5pY5hE09w35DuEcr1jaXidfastIfOhblecUqEgsiwfgYo//kWSWS7WDT5PJ8Mi3JfB8BaxoCNrWEmSMXt70o0OsmRzr4oKZuPr/Z+9Xkqrb68hKVrgFHXXBe/OTKfJTBMhpDD/12tgXGY+SqTe9coLcWd42gznaHO2K/AZ1vMIor1lZrBddBKYW0gUBI2FmttM/daYXszS3XssA7+g8vTMid6UVY9c+HoORcfof2ze+zNPdXbdvo7d74QA2VWyV6fyQBbnnw+TFoIcRe/EMrCvaOHWup0xNVk8E=) 2026-01-17 00:23:18.276073 | orchestrator | 2026-01-17 00:23:18.276084 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-01-17 00:23:18.276096 | orchestrator | Saturday 17 January 2026 00:23:08 +0000 (0:00:01.074) 0:00:14.108 ****** 2026-01-17 00:23:18.276108 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-17 00:23:18.276120 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-17 00:23:18.276131 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-17 00:23:18.276141 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-17 00:23:18.276152 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-17 00:23:18.276163 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-17 00:23:18.276206 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-17 00:23:18.276217 | orchestrator | 2026-01-17 00:23:18.276228 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-01-17 00:23:18.276265 | orchestrator | Saturday 17 January 2026 00:23:13 +0000 (0:00:05.371) 0:00:19.480 ****** 2026-01-17 00:23:18.276277 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of 
testbed-manager) 2026-01-17 00:23:18.276290 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-17 00:23:18.276301 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-17 00:23:18.276329 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-17 00:23:18.276341 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-17 00:23:18.276352 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-17 00:23:18.276363 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-17 00:23:18.276374 | orchestrator | 2026-01-17 00:23:18.276405 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-17 00:23:18.276418 | orchestrator | Saturday 17 January 2026 00:23:13 +0000 (0:00:00.182) 0:00:19.662 ****** 2026-01-17 00:23:18.276431 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPk+T6TsMAp3nYvbpDXhnhNSSLGuR5KCQZRpTcA/+BoC) 2026-01-17 00:23:18.276446 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDKbvzONkiiGUtKsSUVs4rqmbXIywahtDMmOXxwss4u3WnoP3TVYdoN01JE+BbW2EDDq8uJ9GabZ7oHAlP0U6q2OxB6CsBJC8giSMIJvkE2xLw3GSn9s48T+g6amUIwxBIMjF1nxIinOENLrKDyh3MDvGqcJ8VL10bzcx8g3lcGFwQ+Nlmsc0VRsku9Kipf35oyb07hUfgOPnI58DhgWgBnrjaxsakQCA8fJfQSuV72QyHG5++5V+AAss1PgYbQKQ27UD96RPSQmF323EpQY/pQFTnLLI5T6l2Hkg17e9f7Ys1sicJtIj9QyZ2nPwaU1EkdTo77Q9RnOPTRvnCMF/SZdE4WfVWHvxrdUz5SQ6Zlwe7US6rfa2xXAjFFTcbAPIuFOQqe5QpqaqgtzL9+EFBUwVQXsSir9okzReNyQuu5vX+4JpgQqW/AuJjGiq+0wA7cLyUOfYvoEZfvoGptHDg/+l4LGHWhOR+Mbsy1f5r7RGrfBjKA+XK5aQIi4vGuBKk=) 2026-01-17 00:23:18.276460 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHSCC7UJ4xhhMEwMA1FGC3JpyjMm+WnRJmql/AY5tlgXpo5VD8CrKdn8UYLtZ40QhcoV7sX45WF0CiiMzC0Whtk=) 2026-01-17 00:23:18.276472 | orchestrator | 2026-01-17 00:23:18.276484 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-17 00:23:18.276497 | orchestrator | Saturday 17 January 2026 00:23:14 +0000 (0:00:01.066) 0:00:20.728 ****** 2026-01-17 00:23:18.276509 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEnG9PdznQN9c6RohH9blDjz7QbNyHvLOQfX3NiCOm15) 2026-01-17 00:23:18.276522 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIcIduz4/LGoe+oxezUscw3Px4K/nclaea1g0OFjHYiv1sq5zbRnPNXNS4C77hcmMjyKXc2ld86ywX3kFHF6Q/keEJJNsin9VRlUAsbMmxaLEymq6qmWMvC/XTBvYTgHVKi+wPjC1l9kLe6Imu97/VvcIJaaXIWfcEMo6cXxL2XmNjedrTmWpcncE5Lv57Z4QIILvTdyZv+8GCPVG2U13D6aOKbuFYHNOsZENLkK58rpqqGdsY4sa5ufviP0Bt8xXaXC9s9XoOGl2/D8a2JUCX2XMUgTxMcLA0M7jye1/LKyj7Fsjd7MXzRqO3Zwp4m9DQ5NkKPOorU/pvUn3e3JwP5AkxNU3yxnfSezhMYzMAg5vmv+mKVYR58QuS4ub8hhCEfjOMr8wf/35XOdpCLPWs+7wWyWfifTTn+Q0l3jdQ8iTnOs2xLLOMPV+INddRBEEe3PMKnE8NhOh8EOIaPge+SFDfTbvZrMYJEgFpN1Nrl9DaBeba1wn1es2qWfReT8E=) 2026-01-17 00:23:18.276543 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI4B0ngBwmHGyBuMbqVsgJnLSwDMQKC8PXhZQsDhP7jTIBk4R5iXW/lXUNRKIHrFle2aHg98IA+jid9V8XidwN0=) 2026-01-17 00:23:18.276555 | orchestrator | 2026-01-17 00:23:18.276567 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-17 00:23:18.276579 | orchestrator | Saturday 17 January 2026 00:23:16 +0000 (0:00:01.184) 0:00:21.913 ****** 2026-01-17 00:23:18.276592 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtPxvo4ipyJcLs73nsByH5S855a31BWsi7DlX0hiEE65jyTaDkTZI0moZLEwz5wphxw4a/1DpgRSYYmZwDcDXWyjWjjM1HWkjC/kbVZbCpniVCW6ueM+T6aEbNm2gA4TZbtY9d4mfORGYQ5vQSiVXfoP0AwdXi+CTjGpD+Tqswx5iFfPqk7P/7QcBlNxg5wNabjBnNFiOij0gQBeAfQ5cZrozn/l1Wyahl2jDL17/hb8p1OCJnavLVXzA1BYLtnIUC7tDqywJqaSgttKPStONTzd9wwJ3HOdWmcuLZWUa4vsTKzUPs2n5O8GmDTKAIS2AeiR6NnFje8gaBBeUDNhJ/cZt8exl/gXsn3OBFbJFSZaWsqdeQEwYVbw97tWRy1xEO0ufkKfl3SmiMJ+qOnJnvDD9ugDJ5dnYw+Sqp+FrU57he7zdNe72H/PBB0C8+NgDTuYoulBuTC8TkA73Nm42cI5/o7mZjvI+egi88nOQMu2035s6hSBfjC5Zf4t0xMVs=) 2026-01-17 00:23:18.276605 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDGzt+GyD+Jr3bmV25yncEOFqy5rO3zV3EphWjhp7biEeRCTUy0gg2iAAAtta9Isa6xQ2x+kNJDazvXhTg2V4pU=) 2026-01-17 00:23:18.276618 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINWDWaku6jZj+GP5VuhwGAJIvDrRC3tQpdRK2/5AA8Zs) 2026-01-17 00:23:18.276631 | orchestrator | 2026-01-17 00:23:18.276643 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-17 00:23:18.276655 | orchestrator | Saturday 17 January 2026 00:23:17 +0000 (0:00:01.089) 0:00:23.002 ****** 2026-01-17 00:23:18.276667 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEKUdrgpcPTqC3BimTcvNd8evyBrUmetgkqAZRjPrdUlIY0NQfuk5vr4Md7VFdjrGEu3mCkRVnpv7OjporntV4k=) 2026-01-17 00:23:18.276680 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAWEcEzwCjzLjyWZgr+nftUyKHqMaIgYdplg8IyrGl3B) 2026-01-17 00:23:18.276711 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFHobcmM4FCOnt8CvZLgfd/ax0zR43IsUg1HXwfSbb69B3Q7Wja8rd6ROtz+bmVxV9gHsWfwjydLX7QreQjZn5lJCs1T4kS30xaWVQUtlRXFOK20Lo4pjAUyKq2X/OnEBF8yd1R+PvM/1EXI/Rupa0RbjmtU7WgvnW18GGxcRLb5CD/BXol561hjXAWBaUl4vg/3dHtt/vX7CiPQzCw1IsyxV/nDtU5HtaRqN5XkcclkNiEGG9Eo7PtPyqntIA7PKnErJmmmGUl+tvnTGpdQHFAGeiY91QrWXIycDCwH9K6HlX/CS8eAxBHSx6gY7THyHLs/JhuFP8zXH42o6o/yheMlvBwxPRh3ctr6KyVoKKgOqzvPTVIGIdyqxDN0/Sr/dDF0KUA9hgt9pOwvxT6knl6G1rDSMhlSYsECutj2ZMQO6ur8a6tU6xeMnq8qPQHbC2ENNyeg2lllL3Nzt/D2xhtdecwuc8NlY1F4GHGcnBFxSsIBRnUTadnNEQhXNmY4M=) 2026-01-17 00:23:22.818593 | orchestrator | 2026-01-17 00:23:22.818696 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-17 00:23:22.818710 | orchestrator | Saturday 17 January 2026 00:23:18 +0000 (0:00:01.053) 0:00:24.055 ****** 2026-01-17 00:23:22.818738 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvxXdD4YH76KHbOgSUglOZ2+jJeBGhOtNSysoIEjQN1kE3wms9Ly1Ie0dT+p+YuCO9bMAbrVqbcjP/SBvpAkFCKVdjsCIiQbOjU72fUlSoCPhz3Pwqpq+3oX79bbJfUpYIwINK2/taCR5iVv6iMfQiSbnrXxZrgiAfrwhfNtmLhvHw9SEX611Cr8WmNb/pDalT9bZGRukUayZoqZ3gV5IEdzY9+PyQ0cd01i5EHMIZUV6QofJvNBc2sKredlYJystbD6Si7FID2NVjTdNc3iTZz0jDHPDbS/blQKgOTpBbqjr2498Up4BLp7hz+R+Q7eGVRw378Ft2Kubgr3RfYWD2h4NXAnR02YQ77Z/zBkZrdN7l5Cs8+uxuL48VRoFKCY9/MBBJpxnQHrmxe51vAOiNwj3gjcNDjAMmAu1Co32QcCxvH1F9XwBCgaUVekJFSWOfZhhg0Xof6dYdUxTuHOsivhRB3I2yoDUgf27hsAt9XSybS1N0C+JPfIPd34uAvBU=) 2026-01-17 00:23:22.818751 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMYLZz00M55GuOk0xy00G/GPVi0nC4sE0a5DNxKvxCIXWAgth648MXTJmbm7N/H21qp5diJE/4AEUR+VUgdWtO4=) 2026-01-17 00:23:22.818761 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKU6esRQ/z+EOHyVxgQApSimmMHqHLMXfjljGV2W0r/2) 2026-01-17 00:23:22.818790 | orchestrator | 2026-01-17 00:23:22.818802 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-17 00:23:22.818811 | orchestrator | Saturday 17 January 2026 00:23:19 +0000 (0:00:01.065) 0:00:25.121 ****** 2026-01-17 00:23:22.818819 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPwWgTclbp/i3sMEeOXGB0gdGVlMioM9fKfG1Q3en/X0) 2026-01-17 00:23:22.818827 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyPK7L3ZgVZSH1q9YwtpfXp1PUdc5moUbtCG/iIr77WoKwIvlJ6DKfvmKr3XtRZ63vmIRKVKT0t7XwHDBNVaTSF+164zI84aj5e9AmWrP8qVipNj/9UwGdGPVom/IYgKzX/wtPVVEgXZ+k3UzWHfDkLbRk/CkSU3cYRexMmKCJiPHco7OxqzEK/NBtRnERzjeiEmGblKUjVNO/zo2MmlLL6uhoY1mpwSCX3gZwy7XDeO3gFO3byV8rw74kEf2gqWwYfy5xZYNoebbYGRLZZMa6bXV1aPev13sS4JAkQ+kJ5MKlijK8BRjPf40imUpCPx+V8gdtppR+C6G2jXWK9mcOoA7iurSTjROf2x3Uo5pTPkK8n9W7oDjjReDtTWoGA6LZOTdfOHS5mHPlVlKT3+Dm2AC18VnVIS4pKm+5WIqavw3pcy54fQlGVPjFQKD1qUwj5qxhL6eBnGHzjpU4FXBjRPKcXZjprW6Ih7ixP28++o+404nAlCvM4cjKeydokH0=) 2026-01-17 00:23:22.818835 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEZ325QaTWVPei+zJVTZ/jGjdRZN6RBAYsOoZKSutiRK2YRu30REbLApDNsj4N1ZIpQa2p7Q4MONWNJ9Wuwx6pQ=) 2026-01-17 00:23:22.818843 | orchestrator | 2026-01-17 00:23:22.818851 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-17 00:23:22.818859 | orchestrator | Saturday 17 January 2026 00:23:20 +0000 (0:00:01.117) 0:00:26.239 ****** 
2026-01-17 00:23:22.818867 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILDHStI/D2b93659uZ1N9jMojhSRfSmN0xvoN74OcrPV) 2026-01-17 00:23:22.818875 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPDa+J6DOj0Y7O449WH3go7R3ukGXAR49TmUkUWxtBx+ZEWzwsoM0EfX8dCYiQXt5mI/4WaIL3KWbPqJGI/N4iXWqOblKxScAV5LY2oOkOBL2UPzvKrTHaqXhMKs+BMW+Wiz1tUwowzPG4DkvhBlmY/+D0wndofcpv9BLPhgqMgQ4Akbw1t1iCIyLvGqS8c8R364aN6FblAK8q3ITCkWG0oF+pFykc5pY5hE09w35DuEcr1jaXidfastIfOhblecUqEgsiwfgYo//kWSWS7WDT5PJ8Mi3JfB8BaxoCNrWEmSMXt70o0OsmRzr4oKZuPr/Z+9Xkqrb68hKVrgFHXXBe/OTKfJTBMhpDD/12tgXGY+SqTe9coLcWd42gznaHO2K/AZ1vMIor1lZrBddBKYW0gUBI2FmttM/daYXszS3XssA7+g8vTMid6UVY9c+HoORcfof2ze+zNPdXbdvo7d74QA2VWyV6fyQBbnnw+TFoIcRe/EMrCvaOHWup0xNVk8E=) 2026-01-17 00:23:22.818883 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPEbWhoTMc3tFT7Ds6dU0B0kxsMiNR/v4fsF2mx+ZM3wXH21vC/lmDF3gd42+oNF4tPeGo+bcrxhImT0HrZxxmE=) 2026-01-17 00:23:22.818891 | orchestrator | 2026-01-17 00:23:22.818899 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-01-17 00:23:22.818907 | orchestrator | Saturday 17 January 2026 00:23:21 +0000 (0:00:01.111) 0:00:27.350 ****** 2026-01-17 00:23:22.818916 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-17 00:23:22.818924 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-01-17 00:23:22.818932 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-17 00:23:22.818940 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-17 00:23:22.818947 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-17 00:23:22.818955 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-17 00:23:22.818963 | orchestrator | 
skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-17 00:23:22.818971 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:23:22.818979 | orchestrator | 2026-01-17 00:23:22.819002 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-01-17 00:23:22.819010 | orchestrator | Saturday 17 January 2026 00:23:21 +0000 (0:00:00.162) 0:00:27.513 ****** 2026-01-17 00:23:22.819018 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:23:22.819026 | orchestrator | 2026-01-17 00:23:22.819040 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-01-17 00:23:22.819047 | orchestrator | Saturday 17 January 2026 00:23:21 +0000 (0:00:00.061) 0:00:27.575 ****** 2026-01-17 00:23:22.819055 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:23:22.819063 | orchestrator | 2026-01-17 00:23:22.819071 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-01-17 00:23:22.819078 | orchestrator | Saturday 17 January 2026 00:23:21 +0000 (0:00:00.052) 0:00:27.627 ****** 2026-01-17 00:23:22.819086 | orchestrator | changed: [testbed-manager] 2026-01-17 00:23:22.819100 | orchestrator | 2026-01-17 00:23:22.819114 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:23:22.819128 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-17 00:23:22.819142 | orchestrator | 2026-01-17 00:23:22.819155 | orchestrator | 2026-01-17 00:23:22.819200 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 00:23:22.819215 | orchestrator | Saturday 17 January 2026 00:23:22 +0000 (0:00:00.762) 0:00:28.389 ****** 2026-01-17 00:23:22.819230 | orchestrator | =============================================================================== 2026-01-17 00:23:22.819245 | orchestrator 
| osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.05s 2026-01-17 00:23:22.819260 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.37s 2026-01-17 00:23:22.819275 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-01-17 00:23:22.819290 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-01-17 00:23:22.819305 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-01-17 00:23:22.819320 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-01-17 00:23:22.819335 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-01-17 00:23:22.819350 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-01-17 00:23:22.819365 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-01-17 00:23:22.819379 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-01-17 00:23:22.819393 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-01-17 00:23:22.819403 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-01-17 00:23:22.819411 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-01-17 00:23:22.819420 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-01-17 00:23:22.819430 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-01-17 00:23:22.819438 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-01-17 00:23:22.819447 | orchestrator | 
osism.commons.known_hosts : Set file permissions ------------------------ 0.76s 2026-01-17 00:23:22.819456 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2026-01-17 00:23:22.819465 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2026-01-17 00:23:22.819475 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2026-01-17 00:23:23.147427 | orchestrator | + osism apply squid 2026-01-17 00:23:35.212029 | orchestrator | 2026-01-17 00:23:35 | INFO  | Task 4b6f36e2-de7f-45c6-9d19-6067d5c2f45c (squid) was prepared for execution. 2026-01-17 00:23:35.212143 | orchestrator | 2026-01-17 00:23:35 | INFO  | It takes a moment until task 4b6f36e2-de7f-45c6-9d19-6067d5c2f45c (squid) has been started and output is visible here. 2026-01-17 00:25:33.029759 | orchestrator | 2026-01-17 00:25:33.029874 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-01-17 00:25:33.029919 | orchestrator | 2026-01-17 00:25:33.029932 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-01-17 00:25:33.029959 | orchestrator | Saturday 17 January 2026 00:23:39 +0000 (0:00:00.173) 0:00:00.173 ****** 2026-01-17 00:25:33.029970 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-01-17 00:25:33.029982 | orchestrator | 2026-01-17 00:25:33.029993 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-01-17 00:25:33.030004 | orchestrator | Saturday 17 January 2026 00:23:39 +0000 (0:00:00.090) 0:00:00.264 ****** 2026-01-17 00:25:33.030130 | orchestrator | ok: [testbed-manager] 2026-01-17 00:25:33.030149 | orchestrator | 2026-01-17 00:25:33.030160 | orchestrator | TASK 
[osism.services.squid : Create required directories] ********************** 2026-01-17 00:25:33.030170 | orchestrator | Saturday 17 January 2026 00:23:41 +0000 (0:00:01.530) 0:00:01.794 ****** 2026-01-17 00:25:33.030182 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-01-17 00:25:33.030193 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-01-17 00:25:33.030203 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-01-17 00:25:33.030213 | orchestrator | 2026-01-17 00:25:33.030223 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-01-17 00:25:33.030233 | orchestrator | Saturday 17 January 2026 00:23:42 +0000 (0:00:01.176) 0:00:02.970 ****** 2026-01-17 00:25:33.030243 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-01-17 00:25:33.030255 | orchestrator | 2026-01-17 00:25:33.030265 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-01-17 00:25:33.030275 | orchestrator | Saturday 17 January 2026 00:23:43 +0000 (0:00:01.107) 0:00:04.078 ****** 2026-01-17 00:25:33.030282 | orchestrator | ok: [testbed-manager] 2026-01-17 00:25:33.030288 | orchestrator | 2026-01-17 00:25:33.030295 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-01-17 00:25:33.030302 | orchestrator | Saturday 17 January 2026 00:23:43 +0000 (0:00:00.367) 0:00:04.446 ****** 2026-01-17 00:25:33.030309 | orchestrator | changed: [testbed-manager] 2026-01-17 00:25:33.030316 | orchestrator | 2026-01-17 00:25:33.030323 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-01-17 00:25:33.030330 | orchestrator | Saturday 17 January 2026 00:23:44 +0000 (0:00:00.940) 0:00:05.387 ****** 2026-01-17 00:25:33.030337 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-01-17 00:25:33.030345 | orchestrator | ok: [testbed-manager] 2026-01-17 00:25:33.030352 | orchestrator | 2026-01-17 00:25:33.030360 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-01-17 00:25:33.030367 | orchestrator | Saturday 17 January 2026 00:24:19 +0000 (0:00:35.227) 0:00:40.614 ****** 2026-01-17 00:25:33.030374 | orchestrator | changed: [testbed-manager] 2026-01-17 00:25:33.030381 | orchestrator | 2026-01-17 00:25:33.030388 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-01-17 00:25:33.030405 | orchestrator | Saturday 17 January 2026 00:24:31 +0000 (0:00:12.031) 0:00:52.646 ****** 2026-01-17 00:25:33.030417 | orchestrator | Pausing for 60 seconds 2026-01-17 00:25:33.030427 | orchestrator | changed: [testbed-manager] 2026-01-17 00:25:33.030439 | orchestrator | 2026-01-17 00:25:33.030450 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-01-17 00:25:33.030461 | orchestrator | Saturday 17 January 2026 00:25:32 +0000 (0:01:00.099) 0:01:52.746 ****** 2026-01-17 00:25:33.030472 | orchestrator | ok: [testbed-manager] 2026-01-17 00:25:33.030483 | orchestrator | 2026-01-17 00:25:33.030494 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-01-17 00:25:33.030501 | orchestrator | Saturday 17 January 2026 00:25:32 +0000 (0:00:00.063) 0:01:52.810 ****** 2026-01-17 00:25:33.030508 | orchestrator | changed: [testbed-manager] 2026-01-17 00:25:33.030515 | orchestrator | 2026-01-17 00:25:33.030523 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:25:33.030539 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:25:33.030547 | orchestrator | 2026-01-17 00:25:33.030555 | orchestrator | 2026-01-17 00:25:33.030562 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-01-17 00:25:33.030569 | orchestrator | Saturday 17 January 2026 00:25:32 +0000 (0:00:00.670) 0:01:53.481 ****** 2026-01-17 00:25:33.030575 | orchestrator | =============================================================================== 2026-01-17 00:25:33.030581 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.10s 2026-01-17 00:25:33.030587 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 35.23s 2026-01-17 00:25:33.030594 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.03s 2026-01-17 00:25:33.030600 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.53s 2026-01-17 00:25:33.030606 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.18s 2026-01-17 00:25:33.030612 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.11s 2026-01-17 00:25:33.030618 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.94s 2026-01-17 00:25:33.030625 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.67s 2026-01-17 00:25:33.030631 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2026-01-17 00:25:33.030637 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2026-01-17 00:25:33.030643 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-01-17 00:25:33.369105 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-17 00:25:33.369221 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-01-17 00:25:33.375907 | orchestrator | + set -e 2026-01-17 00:25:33.376003 | orchestrator | + NAMESPACE=kolla 2026-01-17 
00:25:33.376045 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-01-17 00:25:33.383236 | orchestrator | ++ semver latest 9.0.0 2026-01-17 00:25:33.442282 | orchestrator | + [[ -1 -lt 0 ]] 2026-01-17 00:25:33.442369 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-17 00:25:33.442989 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-01-17 00:25:45.659584 | orchestrator | 2026-01-17 00:25:45 | INFO  | Task 86a89899-2a65-451b-b733-cb0c74d6aa9f (operator) was prepared for execution. 2026-01-17 00:25:45.659688 | orchestrator | 2026-01-17 00:25:45 | INFO  | It takes a moment until task 86a89899-2a65-451b-b733-cb0c74d6aa9f (operator) has been started and output is visible here. 2026-01-17 00:26:01.695725 | orchestrator | 2026-01-17 00:26:01.695846 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-01-17 00:26:01.695863 | orchestrator | 2026-01-17 00:26:01.695875 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-17 00:26:01.695887 | orchestrator | Saturday 17 January 2026 00:25:49 +0000 (0:00:00.144) 0:00:00.144 ****** 2026-01-17 00:26:01.695898 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:26:01.695911 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:26:01.695923 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:26:01.695934 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:26:01.695945 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:26:01.695956 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:26:01.695966 | orchestrator | 2026-01-17 00:26:01.695982 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-01-17 00:26:01.696058 | orchestrator | Saturday 17 January 2026 00:25:53 +0000 (0:00:03.299) 0:00:03.443 ****** 2026-01-17 00:26:01.696072 | orchestrator | ok: [testbed-node-5] 
2026-01-17 00:26:01.696083 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:26:01.696093 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:26:01.696104 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:26:01.696114 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:26:01.696148 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:26:01.696160 | orchestrator | 2026-01-17 00:26:01.696171 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-01-17 00:26:01.696182 | orchestrator | 2026-01-17 00:26:01.696193 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-17 00:26:01.696203 | orchestrator | Saturday 17 January 2026 00:25:54 +0000 (0:00:00.857) 0:00:04.301 ****** 2026-01-17 00:26:01.696214 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:26:01.696225 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:26:01.696235 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:26:01.696246 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:26:01.696256 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:26:01.696267 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:26:01.696277 | orchestrator | 2026-01-17 00:26:01.696288 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-17 00:26:01.696299 | orchestrator | Saturday 17 January 2026 00:25:54 +0000 (0:00:00.167) 0:00:04.469 ****** 2026-01-17 00:26:01.696310 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:26:01.696320 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:26:01.696331 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:26:01.696341 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:26:01.696352 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:26:01.696363 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:26:01.696373 | orchestrator | 2026-01-17 00:26:01.696384 | orchestrator | TASK [osism.commons.operator : Create operator group] 
**************************
2026-01-17 00:26:01.696395 | orchestrator | Saturday 17 January 2026 00:25:54 +0000 (0:00:00.186) 0:00:04.656 ******
2026-01-17 00:26:01.696406 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:26:01.696417 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:26:01.696427 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:26:01.696438 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:26:01.696449 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:26:01.696460 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:26:01.696470 | orchestrator |
2026-01-17 00:26:01.696481 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-01-17 00:26:01.696492 | orchestrator | Saturday 17 January 2026 00:25:55 +0000 (0:00:00.605) 0:00:05.261 ******
2026-01-17 00:26:01.696502 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:26:01.696518 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:26:01.696536 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:26:01.696552 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:26:01.696569 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:26:01.696587 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:26:01.696605 | orchestrator |
2026-01-17 00:26:01.696625 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-01-17 00:26:01.696642 | orchestrator | Saturday 17 January 2026 00:25:55 +0000 (0:00:00.884) 0:00:06.146 ******
2026-01-17 00:26:01.696660 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-01-17 00:26:01.696680 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-01-17 00:26:01.696699 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-01-17 00:26:01.696714 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-01-17 00:26:01.696724 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-01-17 00:26:01.696735 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-01-17 00:26:01.696746 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-01-17 00:26:01.696756 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-01-17 00:26:01.696786 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-01-17 00:26:01.696797 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-01-17 00:26:01.696808 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-01-17 00:26:01.696819 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-01-17 00:26:01.696829 | orchestrator |
2026-01-17 00:26:01.696840 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-01-17 00:26:01.696861 | orchestrator | Saturday 17 January 2026 00:25:57 +0000 (0:00:01.272) 0:00:07.418 ******
2026-01-17 00:26:01.696872 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:26:01.696944 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:26:01.696968 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:26:01.696987 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:26:01.697028 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:26:01.697049 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:26:01.697067 | orchestrator |
2026-01-17 00:26:01.697086 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-01-17 00:26:01.697099 | orchestrator | Saturday 17 January 2026 00:25:58 +0000 (0:00:01.198) 0:00:08.617 ******
2026-01-17 00:26:01.697109 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-01-17 00:26:01.697120 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-01-17 00:26:01.697131 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-01-17 00:26:01.697142 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-01-17 00:26:01.697174 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-01-17 00:26:01.697186 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-01-17 00:26:01.697197 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-01-17 00:26:01.697207 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-01-17 00:26:01.697218 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-01-17 00:26:01.697229 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-01-17 00:26:01.697239 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-01-17 00:26:01.697250 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-01-17 00:26:01.697261 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-01-17 00:26:01.697271 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-01-17 00:26:01.697282 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-01-17 00:26:01.697293 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-01-17 00:26:01.697303 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-01-17 00:26:01.697314 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-01-17 00:26:01.697325 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-01-17 00:26:01.697335 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-01-17 00:26:01.697346 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-01-17 00:26:01.697356 | orchestrator |
2026-01-17 00:26:01.697367 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-01-17 00:26:01.697379 | orchestrator | Saturday 17 January 2026 00:25:59 +0000 (0:00:01.233) 0:00:09.850 ******
2026-01-17 00:26:01.697389 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:26:01.697400 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:26:01.697411 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:26:01.697421 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:26:01.697432 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:26:01.697442 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:26:01.697453 | orchestrator |
2026-01-17 00:26:01.697471 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-01-17 00:26:01.697482 | orchestrator | Saturday 17 January 2026 00:25:59 +0000 (0:00:00.125) 0:00:09.976 ******
2026-01-17 00:26:01.697493 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:26:01.697503 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:26:01.697514 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:26:01.697524 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:26:01.697544 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:26:01.697555 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:26:01.697566 | orchestrator |
2026-01-17 00:26:01.697576 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-01-17 00:26:01.697587 | orchestrator | Saturday 17 January 2026 00:25:59 +0000 (0:00:00.152) 0:00:10.129 ******
2026-01-17 00:26:01.697598 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:26:01.697608 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:26:01.697619 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:26:01.697630 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:26:01.697640 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:26:01.697651 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:26:01.697661 | orchestrator |
2026-01-17 00:26:01.697672 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-01-17 00:26:01.697683 | orchestrator | Saturday 17 January 2026 00:26:00 +0000 (0:00:00.553) 0:00:10.683 ******
2026-01-17 00:26:01.697693 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:26:01.697704 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:26:01.697714 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:26:01.697725 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:26:01.697735 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:26:01.697746 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:26:01.697756 | orchestrator |
2026-01-17 00:26:01.697767 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-01-17 00:26:01.697778 | orchestrator | Saturday 17 January 2026 00:26:00 +0000 (0:00:00.151) 0:00:10.834 ******
2026-01-17 00:26:01.697789 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-17 00:26:01.697800 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-01-17 00:26:01.697811 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:26:01.697821 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:26:01.697832 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-17 00:26:01.697843 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:26:01.697854 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-17 00:26:01.697864 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:26:01.697875 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-17 00:26:01.697885 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:26:01.697896 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-17 00:26:01.697906 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:26:01.697917 | orchestrator |
2026-01-17 00:26:01.697928 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-01-17 00:26:01.697938 | orchestrator | Saturday 17 January 2026 00:26:01 +0000 (0:00:00.699) 0:00:11.533 ******
2026-01-17 00:26:01.697949 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:26:01.697959 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:26:01.697970 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:26:01.697981 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:26:01.698014 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:26:01.698088 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:26:01.698099 | orchestrator |
2026-01-17 00:26:01.698110 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-01-17 00:26:01.698121 | orchestrator | Saturday 17 January 2026 00:26:01 +0000 (0:00:00.164) 0:00:11.678 ******
2026-01-17 00:26:01.698131 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:26:01.698142 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:26:01.698152 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:26:01.698174 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:26:01.698194 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:26:02.853127 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:26:02.853238 | orchestrator |
2026-01-17 00:26:02.853254 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-01-17 00:26:02.853268 | orchestrator | Saturday 17 January 2026 00:26:01 +0000 (0:00:00.135) 0:00:11.843 ******
2026-01-17 00:26:02.853307 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:26:02.853318 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:26:02.853329 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:26:02.853345 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:26:02.853362 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:26:02.853380 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:26:02.853396 | orchestrator |
2026-01-17 00:26:02.853411 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-01-17 00:26:02.853430 | orchestrator | Saturday 17 January 2026 00:26:01 +0000 (0:00:00.135) 0:00:11.979 ******
2026-01-17 00:26:02.853450 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:26:02.853471 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:26:02.853492 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:26:02.853506 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:26:02.853516 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:26:02.853527 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:26:02.853538 | orchestrator |
2026-01-17 00:26:02.853548 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-01-17 00:26:02.853559 | orchestrator | Saturday 17 January 2026 00:26:02 +0000 (0:00:00.638) 0:00:12.617 ******
2026-01-17 00:26:02.853569 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:26:02.853580 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:26:02.853591 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:26:02.853601 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:26:02.853612 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:26:02.853623 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:26:02.853633 | orchestrator |
2026-01-17 00:26:02.853646 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 00:26:02.853659 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-17 00:26:02.853673 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-17 00:26:02.853686 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-17 00:26:02.853699 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-17 00:26:02.853711 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-17 00:26:02.853724 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-17 00:26:02.853736 | orchestrator |
2026-01-17 00:26:02.853749 | orchestrator |
2026-01-17 00:26:02.853761 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 00:26:02.853774 | orchestrator | Saturday 17 January 2026 00:26:02 +0000 (0:00:00.204) 0:00:12.822 ******
2026-01-17 00:26:02.853787 | orchestrator | ===============================================================================
2026-01-17 00:26:02.853799 | orchestrator | Gathering Facts --------------------------------------------------------- 3.30s
2026-01-17 00:26:02.853809 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.27s
2026-01-17 00:26:02.853820 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.23s
2026-01-17 00:26:02.853831 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.20s
2026-01-17 00:26:02.853842 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.88s
2026-01-17 00:26:02.853853 | orchestrator | Do not require tty for all users ---------------------------------------- 0.86s
2026-01-17 00:26:02.853863 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.70s
2026-01-17 00:26:02.853899 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.64s
2026-01-17 00:26:02.853911 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.61s
2026-01-17 00:26:02.853921 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.55s
2026-01-17 00:26:02.853932 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.20s
2026-01-17 00:26:02.853942 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.19s
2026-01-17 00:26:02.853953 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s
2026-01-17 00:26:02.853964 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s
2026-01-17 00:26:02.853975 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.15s
2026-01-17 00:26:02.853985 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s
2026-01-17 00:26:02.854085 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s
2026-01-17 00:26:02.854096 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2026-01-17 00:26:02.854108 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.13s
2026-01-17 00:26:03.163527 | orchestrator | + osism apply --environment custom facts
2026-01-17 00:26:05.105535 | orchestrator | 2026-01-17 00:26:05 | INFO  | Trying to run play facts in environment custom
2026-01-17 00:26:15.330178 | orchestrator | 2026-01-17 00:26:15 | INFO  | Task a377f6b6-f9ef-4f3e-8e7e-797f526d9699 (facts) was prepared for execution.
2026-01-17 00:26:15.330281 | orchestrator | 2026-01-17 00:26:15 | INFO  | It takes a moment until task a377f6b6-f9ef-4f3e-8e7e-797f526d9699 (facts) has been started and output is visible here.
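As an aside, the PLAY RECAP lines above follow a stable `host : key=value …` shape, so a wrapper script can check them mechanically. A minimal sketch (the recap string is copied from this log; the awk check itself is ours, not part of the testbed tooling):

```shell
# Check an Ansible PLAY RECAP line for failed or unreachable hosts.
# Exits non-zero if any failed=N or unreachable=N counter is above zero.
recap='testbed-node-0 : ok=12 changed=8 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0'
echo "$recap" | awk '{
  bad = 0
  for (i = 1; i <= NF; i++)
    if ($i ~ /^(failed|unreachable)=/) {
      split($i, kv, "=")
      if (kv[2] + 0 > 0) { print $1 " reports " $i; bad = 1 }
    }
  exit bad
}'
```

The same check works unchanged on the recaps further down, since the counter names are identical in every recap block.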
2026-01-17 00:27:00.511481 | orchestrator |
2026-01-17 00:27:00.511595 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-01-17 00:27:00.511612 | orchestrator |
2026-01-17 00:27:00.511624 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-17 00:27:00.511636 | orchestrator | Saturday 17 January 2026 00:26:19 +0000 (0:00:00.085) 0:00:00.085 ******
2026-01-17 00:27:00.511647 | orchestrator | ok: [testbed-manager]
2026-01-17 00:27:00.511660 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:27:00.511672 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:27:00.511683 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:27:00.511694 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:27:00.511705 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:27:00.511717 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:27:00.511728 | orchestrator |
2026-01-17 00:27:00.511739 | orchestrator | TASK [Copy fact file] **********************************************************
2026-01-17 00:27:00.511751 | orchestrator | Saturday 17 January 2026 00:26:20 +0000 (0:00:01.365) 0:00:01.450 ******
2026-01-17 00:27:00.511762 | orchestrator | ok: [testbed-manager]
2026-01-17 00:27:00.511773 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:27:00.511784 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:27:00.511795 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:27:00.511806 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:27:00.511817 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:27:00.511829 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:27:00.511841 | orchestrator |
2026-01-17 00:27:00.511852 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-01-17 00:27:00.511863 | orchestrator |
2026-01-17 00:27:00.511874 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-17 00:27:00.511901 | orchestrator | Saturday 17 January 2026 00:26:22 +0000 (0:00:01.219) 0:00:02.670 ******
2026-01-17 00:27:00.511913 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:27:00.511924 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:27:00.511935 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:27:00.512015 | orchestrator |
2026-01-17 00:27:00.512029 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-17 00:27:00.512043 | orchestrator | Saturday 17 January 2026 00:26:22 +0000 (0:00:00.105) 0:00:02.775 ******
2026-01-17 00:27:00.512056 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:27:00.512069 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:27:00.512081 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:27:00.512093 | orchestrator |
2026-01-17 00:27:00.512106 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-17 00:27:00.512118 | orchestrator | Saturday 17 January 2026 00:26:22 +0000 (0:00:00.214) 0:00:02.990 ******
2026-01-17 00:27:00.512131 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:27:00.512144 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:27:00.512157 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:27:00.512169 | orchestrator |
2026-01-17 00:27:00.512182 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-17 00:27:00.512194 | orchestrator | Saturday 17 January 2026 00:26:22 +0000 (0:00:00.224) 0:00:03.214 ******
2026-01-17 00:27:00.512208 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-17 00:27:00.512222 | orchestrator |
2026-01-17 00:27:00.512235 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-17 00:27:00.512247 | orchestrator | Saturday 17 January 2026 00:26:22 +0000 (0:00:00.157) 0:00:03.371 ******
2026-01-17 00:27:00.512260 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:27:00.512272 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:27:00.512285 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:27:00.512298 | orchestrator |
2026-01-17 00:27:00.512310 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-17 00:27:00.512323 | orchestrator | Saturday 17 January 2026 00:26:23 +0000 (0:00:00.446) 0:00:03.817 ******
2026-01-17 00:27:00.512336 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:27:00.512349 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:27:00.512361 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:27:00.512374 | orchestrator |
2026-01-17 00:27:00.512386 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-17 00:27:00.512398 | orchestrator | Saturday 17 January 2026 00:26:23 +0000 (0:00:00.143) 0:00:03.961 ******
2026-01-17 00:27:00.512411 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:27:00.512422 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:27:00.512433 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:27:00.512444 | orchestrator |
2026-01-17 00:27:00.512455 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-17 00:27:00.512466 | orchestrator | Saturday 17 January 2026 00:26:24 +0000 (0:00:01.068) 0:00:05.030 ******
2026-01-17 00:27:00.512476 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:27:00.512487 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:27:00.512498 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:27:00.512509 | orchestrator |
2026-01-17 00:27:00.512519 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-17 00:27:00.512530 | orchestrator | Saturday 17 January 2026 00:26:24 +0000 (0:00:00.498) 0:00:05.529 ******
2026-01-17 00:27:00.512541 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:27:00.512552 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:27:00.512563 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:27:00.512574 | orchestrator |
2026-01-17 00:27:00.512585 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-17 00:27:00.512595 | orchestrator | Saturday 17 January 2026 00:26:26 +0000 (0:00:01.064) 0:00:06.594 ******
2026-01-17 00:27:00.512606 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:27:00.512617 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:27:00.512628 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:27:00.512639 | orchestrator |
2026-01-17 00:27:00.512649 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-01-17 00:27:00.512669 | orchestrator | Saturday 17 January 2026 00:26:42 +0000 (0:00:16.697) 0:00:23.292 ******
2026-01-17 00:27:00.512680 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:27:00.512691 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:27:00.512702 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:27:00.512712 | orchestrator |
2026-01-17 00:27:00.512723 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-01-17 00:27:00.512753 | orchestrator | Saturday 17 January 2026 00:26:42 +0000 (0:00:00.090) 0:00:23.382 ******
2026-01-17 00:27:00.512764 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:27:00.512775 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:27:00.512786 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:27:00.512797 | orchestrator |
2026-01-17 00:27:00.512808 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-17 00:27:00.512819 | orchestrator | Saturday 17 January 2026 00:26:51 +0000 (0:00:08.367) 0:00:31.750 ******
2026-01-17 00:27:00.512829 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:27:00.512840 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:27:00.512851 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:27:00.512861 | orchestrator |
2026-01-17 00:27:00.512873 | orchestrator | TASK [Copy fact files] *********************************************************
2026-01-17 00:27:00.512883 | orchestrator | Saturday 17 January 2026 00:26:51 +0000 (0:00:00.455) 0:00:32.206 ******
2026-01-17 00:27:00.512894 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-01-17 00:27:00.512905 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-01-17 00:27:00.512916 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-01-17 00:27:00.512927 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-01-17 00:27:00.512938 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-01-17 00:27:00.512974 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-01-17 00:27:00.512985 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-01-17 00:27:00.512996 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-01-17 00:27:00.513007 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-01-17 00:27:00.513018 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-01-17 00:27:00.513028 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-01-17 00:27:00.513039 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-01-17 00:27:00.513050 | orchestrator |
2026-01-17 00:27:00.513060 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-17 00:27:00.513071 | orchestrator | Saturday 17 January 2026 00:26:55 +0000 (0:00:03.563) 0:00:35.770 ******
2026-01-17 00:27:00.513082 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:27:00.513093 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:27:00.513103 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:27:00.513114 | orchestrator |
2026-01-17 00:27:00.513125 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-17 00:27:00.513135 | orchestrator |
2026-01-17 00:27:00.513146 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-17 00:27:00.513157 | orchestrator | Saturday 17 January 2026 00:26:56 +0000 (0:00:01.410) 0:00:37.181 ******
2026-01-17 00:27:00.513168 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:27:00.513179 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:27:00.513189 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:27:00.513200 | orchestrator | ok: [testbed-manager]
2026-01-17 00:27:00.513211 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:27:00.513222 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:27:00.513232 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:27:00.513243 | orchestrator |
2026-01-17 00:27:00.513254 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 00:27:00.513272 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 00:27:00.513284 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 00:27:00.513296 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 00:27:00.513307 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 00:27:00.513318 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-17 00:27:00.513329 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-17 00:27:00.513340 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-17 00:27:00.513351 | orchestrator |
2026-01-17 00:27:00.513362 | orchestrator |
2026-01-17 00:27:00.513373 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 00:27:00.513383 | orchestrator | Saturday 17 January 2026 00:27:00 +0000 (0:00:03.885) 0:00:41.066 ******
2026-01-17 00:27:00.513394 | orchestrator | ===============================================================================
2026-01-17 00:27:00.513405 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.70s
2026-01-17 00:27:00.513416 | orchestrator | Install required packages (Debian) -------------------------------------- 8.37s
2026-01-17 00:27:00.513426 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.89s
2026-01-17 00:27:00.513437 | orchestrator | Copy fact files --------------------------------------------------------- 3.56s
2026-01-17 00:27:00.513486 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.41s
2026-01-17 00:27:00.513498 | orchestrator | Create custom facts directory ------------------------------------------- 1.37s
2026-01-17 00:27:00.513516 | orchestrator | Copy fact file ---------------------------------------------------------- 1.22s
2026-01-17 00:27:00.906836 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.07s
2026-01-17 00:27:00.906907 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.06s
2026-01-17 00:27:00.906913 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.50s
2026-01-17 00:27:00.906918 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s
2026-01-17 00:27:00.906922 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s
2026-01-17 00:27:00.906926 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s
2026-01-17 00:27:00.906930 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2026-01-17 00:27:00.906935 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s
2026-01-17 00:27:00.906972 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s
2026-01-17 00:27:00.906976 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2026-01-17 00:27:00.906980 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s
2026-01-17 00:27:01.235149 | orchestrator | + osism apply bootstrap
2026-01-17 00:27:13.363700 | orchestrator | 2026-01-17 00:27:13 | INFO  | Task e8ba6a1a-3127-42a0-a8ee-58fe5e88feb8 (bootstrap) was prepared for execution.
2026-01-17 00:27:13.363873 | orchestrator | 2026-01-17 00:27:13 | INFO  | It takes a moment until task e8ba6a1a-3127-42a0-a8ee-58fe5e88feb8 (bootstrap) has been started and output is visible here.
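The repository role above removed the legacy `sources.list` and copied an `ubuntu.sources` file: on Ubuntu 24.04 the apt archive configuration uses the deb822 format. For orientation, a sketch with the stock Ubuntu archive values (the file actually deployed by the role may point at different mirrors):

```
Types: deb
URIs: http://archive.ubuntu.com/ubuntu/
Suites: noble noble-updates noble-backports
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
```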
2026-01-17 00:27:30.735697 | orchestrator |
2026-01-17 00:27:30.735835 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-01-17 00:27:30.735854 | orchestrator |
2026-01-17 00:27:30.735867 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-01-17 00:27:30.735878 | orchestrator | Saturday 17 January 2026 00:27:17 +0000 (0:00:00.170) 0:00:00.170 ******
2026-01-17 00:27:30.735889 | orchestrator | ok: [testbed-manager]
2026-01-17 00:27:30.735901 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:27:30.735912 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:27:30.735981 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:27:30.735993 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:27:30.736003 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:27:30.736014 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:27:30.736025 | orchestrator |
2026-01-17 00:27:30.736036 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-17 00:27:30.736047 | orchestrator |
2026-01-17 00:27:30.736057 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-17 00:27:30.736068 | orchestrator | Saturday 17 January 2026 00:27:18 +0000 (0:00:00.258) 0:00:00.428 ******
2026-01-17 00:27:30.736079 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:27:30.736090 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:27:30.736100 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:27:30.736111 | orchestrator | ok: [testbed-manager]
2026-01-17 00:27:30.736122 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:27:30.736133 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:27:30.736143 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:27:30.736153 | orchestrator |
2026-01-17 00:27:30.736164 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-01-17 00:27:30.736175 | orchestrator |
2026-01-17 00:27:30.736185 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-17 00:27:30.736196 | orchestrator | Saturday 17 January 2026 00:27:22 +0000 (0:00:04.563) 0:00:04.991 ******
2026-01-17 00:27:30.736208 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-01-17 00:27:30.736219 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-01-17 00:27:30.736230 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-01-17 00:27:30.736240 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-01-17 00:27:30.736251 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-17 00:27:30.736262 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-01-17 00:27:30.736273 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-17 00:27:30.736283 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-01-17 00:27:30.736294 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-17 00:27:30.736304 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-17 00:27:30.736315 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-01-17 00:27:30.736326 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-17 00:27:30.736337 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-01-17 00:27:30.736347 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-01-17 00:27:30.736358 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-01-17 00:27:30.736368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-17 00:27:30.736379 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-01-17 00:27:30.736390 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-01-17 00:27:30.736400 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-01-17 00:27:30.736411 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-17 00:27:30.736421 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-01-17 00:27:30.736432 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-01-17 00:27:30.736443 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-17 00:27:30.736462 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:27:30.736473 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-17 00:27:30.736484 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:27:30.736494 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-17 00:27:30.736505 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-01-17 00:27:30.736516 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-01-17 00:27:30.736526 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:27:30.736537 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-01-17 00:27:30.736548 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-01-17 00:27:30.736558 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-01-17 00:27:30.736569 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-17 00:27:30.736579 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-01-17 00:27:30.736590 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-01-17 00:27:30.736601 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-01-17 00:27:30.736611 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-01-17 00:27:30.736622 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-01-17 00:27:30.736633 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-17 00:27:30.736643 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-17 00:27:30.736654 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-17 00:27:30.736664 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-01-17 00:27:30.736691 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-17 00:27:30.736705 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:27:30.736724 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-17 00:27:30.736794 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-17 00:27:30.736814 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-01-17 00:27:30.736831 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-17 00:27:30.736849 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-17 00:27:30.736867 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:27:30.736887 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:27:30.736905 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-17 00:27:30.736983 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-17 00:27:30.736995 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-17 00:27:30.737006 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:27:30.737017 | orchestrator |
2026-01-17 00:27:30.737027 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-01-17 00:27:30.737038 | orchestrator |
2026-01-17 00:27:30.737049 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-01-17 00:27:30.737059 | orchestrator | Saturday 17 January 2026 00:27:23 +0000
(0:00:00.473) 0:00:05.465 ****** 2026-01-17 00:27:30.737070 | orchestrator | ok: [testbed-manager] 2026-01-17 00:27:30.737080 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:27:30.737091 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:27:30.737101 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:27:30.737112 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:27:30.737125 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:27:30.737144 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:27:30.737162 | orchestrator | 2026-01-17 00:27:30.737180 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-01-17 00:27:30.737198 | orchestrator | Saturday 17 January 2026 00:27:24 +0000 (0:00:01.205) 0:00:06.671 ****** 2026-01-17 00:27:30.737216 | orchestrator | ok: [testbed-manager] 2026-01-17 00:27:30.737235 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:27:30.737268 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:27:30.737286 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:27:30.737301 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:27:30.737311 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:27:30.737322 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:27:30.737333 | orchestrator | 2026-01-17 00:27:30.737344 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-01-17 00:27:30.737354 | orchestrator | Saturday 17 January 2026 00:27:25 +0000 (0:00:01.290) 0:00:07.962 ****** 2026-01-17 00:27:30.737366 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:27:30.737379 | orchestrator | 2026-01-17 00:27:30.737390 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-01-17 00:27:30.737408 | orchestrator 
| Saturday 17 January 2026 00:27:25 +0000 (0:00:00.287) 0:00:08.250 ****** 2026-01-17 00:27:30.737426 | orchestrator | changed: [testbed-manager] 2026-01-17 00:27:30.737445 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:27:30.737463 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:27:30.737480 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:27:30.737498 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:27:30.737517 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:27:30.737536 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:27:30.737554 | orchestrator | 2026-01-17 00:27:30.737573 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-01-17 00:27:30.737591 | orchestrator | Saturday 17 January 2026 00:27:28 +0000 (0:00:02.291) 0:00:10.541 ****** 2026-01-17 00:27:30.737610 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:27:30.737630 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:27:30.737650 | orchestrator | 2026-01-17 00:27:30.737669 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-01-17 00:27:30.737688 | orchestrator | Saturday 17 January 2026 00:27:28 +0000 (0:00:00.293) 0:00:10.835 ****** 2026-01-17 00:27:30.737706 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:27:30.737728 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:27:30.737746 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:27:30.737765 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:27:30.737784 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:27:30.737803 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:27:30.737822 | orchestrator | 2026-01-17 00:27:30.737840 | orchestrator | TASK [osism.commons.proxy : 
Set system wide settings in environment file] ****** 2026-01-17 00:27:30.737859 | orchestrator | Saturday 17 January 2026 00:27:29 +0000 (0:00:01.104) 0:00:11.940 ****** 2026-01-17 00:27:30.737876 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:27:30.737894 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:27:30.737912 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:27:30.737956 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:27:30.737973 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:27:30.737989 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:27:30.738005 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:27:30.738105 | orchestrator | 2026-01-17 00:27:30.738128 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-01-17 00:27:30.738146 | orchestrator | Saturday 17 January 2026 00:27:30 +0000 (0:00:00.647) 0:00:12.587 ****** 2026-01-17 00:27:30.738165 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:27:30.738230 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:27:30.738250 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:27:30.738268 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:27:30.738286 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:27:30.738320 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:27:30.738338 | orchestrator | ok: [testbed-manager] 2026-01-17 00:27:30.738357 | orchestrator | 2026-01-17 00:27:30.738376 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-17 00:27:30.738396 | orchestrator | Saturday 17 January 2026 00:27:30 +0000 (0:00:00.434) 0:00:13.022 ****** 2026-01-17 00:27:30.738415 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:27:30.738433 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:27:30.738470 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:27:43.896714 | orchestrator | skipping: 
[testbed-node-5] 2026-01-17 00:27:43.896823 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:27:43.896837 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:27:43.896846 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:27:43.896856 | orchestrator | 2026-01-17 00:27:43.896866 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-17 00:27:43.896876 | orchestrator | Saturday 17 January 2026 00:27:30 +0000 (0:00:00.222) 0:00:13.244 ****** 2026-01-17 00:27:43.896887 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:27:43.896977 | orchestrator | 2026-01-17 00:27:43.896993 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-17 00:27:43.897003 | orchestrator | Saturday 17 January 2026 00:27:31 +0000 (0:00:00.311) 0:00:13.556 ****** 2026-01-17 00:27:43.897024 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:27:43.897034 | orchestrator | 2026-01-17 00:27:43.897043 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-01-17 00:27:43.897053 | orchestrator | Saturday 17 January 2026 00:27:31 +0000 (0:00:00.343) 0:00:13.899 ****** 2026-01-17 00:27:43.897061 | orchestrator | ok: [testbed-manager] 2026-01-17 00:27:43.897071 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:27:43.897080 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:27:43.897089 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:27:43.897102 | orchestrator | ok: [testbed-node-5] 2026-01-17 
00:27:43.897117 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:27:43.897132 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:27:43.897147 | orchestrator | 2026-01-17 00:27:43.897162 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-17 00:27:43.897177 | orchestrator | Saturday 17 January 2026 00:27:32 +0000 (0:00:01.466) 0:00:15.366 ****** 2026-01-17 00:27:43.897193 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:27:43.897208 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:27:43.897223 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:27:43.897237 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:27:43.897247 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:27:43.897257 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:27:43.897266 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:27:43.897276 | orchestrator | 2026-01-17 00:27:43.897286 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-17 00:27:43.897296 | orchestrator | Saturday 17 January 2026 00:27:33 +0000 (0:00:00.264) 0:00:15.630 ****** 2026-01-17 00:27:43.897305 | orchestrator | ok: [testbed-manager] 2026-01-17 00:27:43.897316 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:27:43.897325 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:27:43.897335 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:27:43.897345 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:27:43.897355 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:27:43.897365 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:27:43.897375 | orchestrator | 2026-01-17 00:27:43.897384 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-17 00:27:43.897415 | orchestrator | Saturday 17 January 2026 00:27:33 +0000 (0:00:00.604) 0:00:16.234 ****** 2026-01-17 00:27:43.897425 | orchestrator | skipping: 
[testbed-manager] 2026-01-17 00:27:43.897433 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:27:43.897442 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:27:43.897457 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:27:43.897473 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:27:43.897488 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:27:43.897500 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:27:43.897509 | orchestrator | 2026-01-17 00:27:43.897518 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-17 00:27:43.897528 | orchestrator | Saturday 17 January 2026 00:27:34 +0000 (0:00:00.366) 0:00:16.601 ****** 2026-01-17 00:27:43.897536 | orchestrator | ok: [testbed-manager] 2026-01-17 00:27:43.897545 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:27:43.897553 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:27:43.897562 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:27:43.897570 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:27:43.897579 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:27:43.897587 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:27:43.897596 | orchestrator | 2026-01-17 00:27:43.897604 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-17 00:27:43.897613 | orchestrator | Saturday 17 January 2026 00:27:34 +0000 (0:00:00.562) 0:00:17.163 ****** 2026-01-17 00:27:43.897621 | orchestrator | ok: [testbed-manager] 2026-01-17 00:27:43.897630 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:27:43.897638 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:27:43.897647 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:27:43.897655 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:27:43.897664 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:27:43.897672 | orchestrator | changed: 
[testbed-node-2] 2026-01-17 00:27:43.897680 | orchestrator | 2026-01-17 00:27:43.897689 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-17 00:27:43.897697 | orchestrator | Saturday 17 January 2026 00:27:35 +0000 (0:00:01.178) 0:00:18.345 ****** 2026-01-17 00:27:43.897706 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:27:43.897714 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:27:43.897723 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:27:43.897731 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:27:43.897740 | orchestrator | ok: [testbed-manager] 2026-01-17 00:27:43.897748 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:27:43.897757 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:27:43.897765 | orchestrator | 2026-01-17 00:27:43.897779 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-17 00:27:43.897788 | orchestrator | Saturday 17 January 2026 00:27:37 +0000 (0:00:01.282) 0:00:19.627 ****** 2026-01-17 00:27:43.897815 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:27:43.897825 | orchestrator | 2026-01-17 00:27:43.897834 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-17 00:27:43.897843 | orchestrator | Saturday 17 January 2026 00:27:37 +0000 (0:00:00.384) 0:00:20.012 ****** 2026-01-17 00:27:43.897851 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:27:43.897859 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:27:43.897868 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:27:43.897876 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:27:43.897885 | orchestrator | changed: [testbed-node-3] 2026-01-17 
00:27:43.897893 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:27:43.897901 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:27:43.897937 | orchestrator | 2026-01-17 00:27:43.897953 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-17 00:27:43.897981 | orchestrator | Saturday 17 January 2026 00:27:38 +0000 (0:00:01.389) 0:00:21.401 ****** 2026-01-17 00:27:43.897997 | orchestrator | ok: [testbed-manager] 2026-01-17 00:27:43.898014 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:27:43.898120 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:27:43.898129 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:27:43.898137 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:27:43.898146 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:27:43.898154 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:27:43.898162 | orchestrator | 2026-01-17 00:27:43.898171 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-17 00:27:43.898179 | orchestrator | Saturday 17 January 2026 00:27:39 +0000 (0:00:00.271) 0:00:21.673 ****** 2026-01-17 00:27:43.898188 | orchestrator | ok: [testbed-manager] 2026-01-17 00:27:43.898196 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:27:43.898205 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:27:43.898213 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:27:43.898222 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:27:43.898230 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:27:43.898238 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:27:43.898247 | orchestrator | 2026-01-17 00:27:43.898255 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-17 00:27:43.898264 | orchestrator | Saturday 17 January 2026 00:27:39 +0000 (0:00:00.233) 0:00:21.906 ****** 2026-01-17 00:27:43.898272 | orchestrator | ok: [testbed-manager] 2026-01-17 00:27:43.898281 | 
orchestrator | ok: [testbed-node-3] 2026-01-17 00:27:43.898289 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:27:43.898297 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:27:43.898306 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:27:43.898314 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:27:43.898322 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:27:43.898331 | orchestrator | 2026-01-17 00:27:43.898339 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-17 00:27:43.898348 | orchestrator | Saturday 17 January 2026 00:27:39 +0000 (0:00:00.279) 0:00:22.186 ****** 2026-01-17 00:27:43.898357 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:27:43.898368 | orchestrator | 2026-01-17 00:27:43.898377 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-17 00:27:43.898385 | orchestrator | Saturday 17 January 2026 00:27:40 +0000 (0:00:00.341) 0:00:22.528 ****** 2026-01-17 00:27:43.898394 | orchestrator | ok: [testbed-manager] 2026-01-17 00:27:43.898402 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:27:43.898410 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:27:43.898419 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:27:43.898427 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:27:43.898436 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:27:43.898444 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:27:43.898452 | orchestrator | 2026-01-17 00:27:43.898461 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-17 00:27:43.898469 | orchestrator | Saturday 17 January 2026 00:27:40 +0000 (0:00:00.616) 0:00:23.145 ****** 2026-01-17 00:27:43.898478 | orchestrator | 
skipping: [testbed-manager] 2026-01-17 00:27:43.898486 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:27:43.898495 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:27:43.898503 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:27:43.898512 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:27:43.898520 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:27:43.898528 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:27:43.898537 | orchestrator | 2026-01-17 00:27:43.898545 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-17 00:27:43.898554 | orchestrator | Saturday 17 January 2026 00:27:40 +0000 (0:00:00.241) 0:00:23.387 ****** 2026-01-17 00:27:43.898562 | orchestrator | ok: [testbed-manager] 2026-01-17 00:27:43.898589 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:27:43.898599 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:27:43.898607 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:27:43.898616 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:27:43.898624 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:27:43.898633 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:27:43.898641 | orchestrator | 2026-01-17 00:27:43.898650 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-17 00:27:43.898658 | orchestrator | Saturday 17 January 2026 00:27:42 +0000 (0:00:01.120) 0:00:24.507 ****** 2026-01-17 00:27:43.898667 | orchestrator | ok: [testbed-manager] 2026-01-17 00:27:43.898675 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:27:43.898684 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:27:43.898692 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:27:43.898700 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:27:43.898709 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:27:43.898717 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:27:43.898726 | orchestrator | 
2026-01-17 00:27:43.898745 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-17 00:27:43.898762 | orchestrator | Saturday 17 January 2026 00:27:42 +0000 (0:00:00.564) 0:00:25.071 ****** 2026-01-17 00:27:43.898779 | orchestrator | ok: [testbed-manager] 2026-01-17 00:27:43.898794 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:27:43.898810 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:27:43.898826 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:27:43.898853 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:28:27.378664 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:28:27.378751 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:28:27.378760 | orchestrator | 2026-01-17 00:28:27.378767 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-17 00:28:27.378774 | orchestrator | Saturday 17 January 2026 00:27:43 +0000 (0:00:01.237) 0:00:26.308 ****** 2026-01-17 00:28:27.378780 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:28:27.378785 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:28:27.378791 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:28:27.378796 | orchestrator | changed: [testbed-manager] 2026-01-17 00:28:27.378802 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:28:27.378807 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:28:27.378813 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:28:27.378818 | orchestrator | 2026-01-17 00:28:27.378824 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2026-01-17 00:28:27.378829 | orchestrator | Saturday 17 January 2026 00:28:00 +0000 (0:00:16.740) 0:00:43.049 ****** 2026-01-17 00:28:27.378835 | orchestrator | ok: [testbed-manager] 2026-01-17 00:28:27.378840 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:28:27.378845 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:28:27.378851 | orchestrator 
| ok: [testbed-node-5] 2026-01-17 00:28:27.378857 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:28:27.378862 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:28:27.378868 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:28:27.378953 | orchestrator | 2026-01-17 00:28:27.378960 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-01-17 00:28:27.378966 | orchestrator | Saturday 17 January 2026 00:28:00 +0000 (0:00:00.229) 0:00:43.279 ****** 2026-01-17 00:28:27.378971 | orchestrator | ok: [testbed-manager] 2026-01-17 00:28:27.378977 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:28:27.378982 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:28:27.378988 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:28:27.378993 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:28:27.378999 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:28:27.379028 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:28:27.379038 | orchestrator | 2026-01-17 00:28:27.379047 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-01-17 00:28:27.379056 | orchestrator | Saturday 17 January 2026 00:28:01 +0000 (0:00:00.247) 0:00:43.526 ****** 2026-01-17 00:28:27.379089 | orchestrator | ok: [testbed-manager] 2026-01-17 00:28:27.379099 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:28:27.379108 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:28:27.379116 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:28:27.379124 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:28:27.379133 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:28:27.379151 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:28:27.379160 | orchestrator | 2026-01-17 00:28:27.379168 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-01-17 00:28:27.379177 | orchestrator | Saturday 17 January 2026 00:28:01 +0000 (0:00:00.279) 0:00:43.806 ****** 2026-01-17 
00:28:27.379188 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:28:27.379198 | orchestrator | 2026-01-17 00:28:27.379208 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-01-17 00:28:27.379235 | orchestrator | Saturday 17 January 2026 00:28:01 +0000 (0:00:00.301) 0:00:44.107 ****** 2026-01-17 00:28:27.379244 | orchestrator | ok: [testbed-manager] 2026-01-17 00:28:27.379252 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:28:27.379261 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:28:27.379270 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:28:27.379280 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:28:27.379289 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:28:27.379298 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:28:27.379307 | orchestrator | 2026-01-17 00:28:27.379321 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-01-17 00:28:27.379330 | orchestrator | Saturday 17 January 2026 00:28:03 +0000 (0:00:01.915) 0:00:46.022 ****** 2026-01-17 00:28:27.379340 | orchestrator | changed: [testbed-manager] 2026-01-17 00:28:27.379350 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:28:27.379359 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:28:27.379370 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:28:27.379379 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:28:27.379389 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:28:27.379398 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:28:27.379408 | orchestrator | 2026-01-17 00:28:27.379418 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-01-17 00:28:27.379428 | 
orchestrator | Saturday 17 January 2026 00:28:04 +0000 (0:00:01.185) 0:00:47.208 ******
2026-01-17 00:28:27.379437 | orchestrator | ok: [testbed-manager]
2026-01-17 00:28:27.379447 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:28:27.379455 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:28:27.379465 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:28:27.379474 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:28:27.379484 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:28:27.379493 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:28:27.379503 | orchestrator |
2026-01-17 00:28:27.379512 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-01-17 00:28:27.379522 | orchestrator | Saturday 17 January 2026 00:28:05 +0000 (0:00:00.883) 0:00:48.092 ******
2026-01-17 00:28:27.379533 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:28:27.379544 | orchestrator |
2026-01-17 00:28:27.379554 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-01-17 00:28:27.379564 | orchestrator | Saturday 17 January 2026 00:28:05 +0000 (0:00:00.296) 0:00:48.388 ******
2026-01-17 00:28:27.379574 | orchestrator | changed: [testbed-manager]
2026-01-17 00:28:27.379583 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:28:27.379591 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:28:27.379600 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:28:27.379609 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:28:27.379632 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:28:27.379642 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:28:27.379649 | orchestrator |
2026-01-17 00:28:27.379677 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-01-17 00:28:27.379688 | orchestrator | Saturday 17 January 2026 00:28:07 +0000 (0:00:01.237) 0:00:49.626 ******
2026-01-17 00:28:27.379697 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:28:27.379762 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:28:27.379774 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:28:27.379782 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:28:27.379787 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:28:27.379792 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:28:27.379798 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:28:27.379803 | orchestrator |
2026-01-17 00:28:27.379809 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-01-17 00:28:27.379830 | orchestrator | Saturday 17 January 2026 00:28:07 +0000 (0:00:00.254) 0:00:49.880 ******
2026-01-17 00:28:27.379836 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:28:27.379842 | orchestrator |
2026-01-17 00:28:27.379848 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-01-17 00:28:27.379853 | orchestrator | Saturday 17 January 2026 00:28:07 +0000 (0:00:00.319) 0:00:50.200 ******
2026-01-17 00:28:27.379859 | orchestrator | ok: [testbed-manager]
2026-01-17 00:28:27.379864 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:28:27.379890 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:28:27.379899 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:28:27.379908 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:28:27.379917 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:28:27.379926 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:28:27.379935 | orchestrator |
2026-01-17 00:28:27.379945 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-01-17 00:28:27.379955 | orchestrator | Saturday 17 January 2026 00:28:09 +0000 (0:00:01.960) 0:00:52.161 ******
2026-01-17 00:28:27.379964 | orchestrator | changed: [testbed-manager]
2026-01-17 00:28:27.379973 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:28:27.379983 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:28:27.379992 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:28:27.380002 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:28:27.380012 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:28:27.380021 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:28:27.380031 | orchestrator |
2026-01-17 00:28:27.380040 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-01-17 00:28:27.380049 | orchestrator | Saturday 17 January 2026 00:28:10 +0000 (0:00:01.248) 0:00:53.409 ******
2026-01-17 00:28:27.380058 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:28:27.380068 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:28:27.380079 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:28:27.380088 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:28:27.380097 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:28:27.380107 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:28:27.380115 | orchestrator | changed: [testbed-manager]
2026-01-17 00:28:27.380124 | orchestrator |
2026-01-17 00:28:27.380134 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-01-17 00:28:27.380144 | orchestrator | Saturday 17 January 2026 00:28:24 +0000 (0:00:13.125) 0:01:06.535 ******
2026-01-17 00:28:27.380153 | orchestrator | ok: [testbed-manager]
2026-01-17 00:28:27.380162 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:28:27.380171 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:28:27.380180 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:28:27.380190 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:28:27.380199 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:28:27.380219 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:28:27.380228 | orchestrator |
2026-01-17 00:28:27.380237 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-01-17 00:28:27.380247 | orchestrator | Saturday 17 January 2026 00:28:25 +0000 (0:00:01.551) 0:01:08.087 ******
2026-01-17 00:28:27.380256 | orchestrator | ok: [testbed-manager]
2026-01-17 00:28:27.380266 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:28:27.380275 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:28:27.380284 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:28:27.380293 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:28:27.380302 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:28:27.380311 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:28:27.380320 | orchestrator |
2026-01-17 00:28:27.380330 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-01-17 00:28:27.380340 | orchestrator | Saturday 17 January 2026 00:28:26 +0000 (0:00:01.047) 0:01:09.134 ******
2026-01-17 00:28:27.380349 | orchestrator | ok: [testbed-manager]
2026-01-17 00:28:27.380357 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:28:27.380367 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:28:27.380376 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:28:27.380386 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:28:27.380395 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:28:27.380404 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:28:27.380414 | orchestrator |
2026-01-17 00:28:27.380423 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-01-17 00:28:27.380433 | orchestrator | Saturday 17 January 2026 00:28:26 +0000 (0:00:00.193) 0:01:09.328 ******
2026-01-17 00:28:27.380442 | orchestrator | ok: [testbed-manager]
2026-01-17 00:28:27.380451 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:28:27.380460 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:28:27.380470 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:28:27.380479 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:28:27.380488 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:28:27.380497 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:28:27.380504 | orchestrator |
2026-01-17 00:28:27.380510 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-01-17 00:28:27.380515 | orchestrator | Saturday 17 January 2026 00:28:27 +0000 (0:00:00.203) 0:01:09.532 ******
2026-01-17 00:28:27.380530 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:28:27.380541 | orchestrator |
2026-01-17 00:28:27.380561 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-01-17 00:30:41.757982 | orchestrator | Saturday 17 January 2026 00:28:27 +0000 (0:00:00.261) 0:01:09.794 ******
2026-01-17 00:30:41.758197 | orchestrator | ok: [testbed-manager]
2026-01-17 00:30:41.758227 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:30:41.758239 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:30:41.758251 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:30:41.758261 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:30:41.758273 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:30:41.758284 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:30:41.758295 | orchestrator |
2026-01-17 00:30:41.758307 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-01-17 00:30:41.758318 | orchestrator | Saturday 17 January 2026 00:28:29 +0000 (0:00:01.819) 0:01:11.614 ******
2026-01-17 00:30:41.758329 | orchestrator | changed: [testbed-manager]
2026-01-17 00:30:41.758341 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:30:41.758352 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:30:41.758362 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:30:41.758373 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:30:41.758383 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:30:41.758394 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:30:41.758406 | orchestrator |
2026-01-17 00:30:41.758447 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-01-17 00:30:41.758461 | orchestrator | Saturday 17 January 2026 00:28:29 +0000 (0:00:00.618) 0:01:12.233 ******
2026-01-17 00:30:41.758473 | orchestrator | ok: [testbed-manager]
2026-01-17 00:30:41.758490 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:30:41.758510 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:30:41.758528 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:30:41.758546 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:30:41.758564 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:30:41.758583 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:30:41.758603 | orchestrator |
2026-01-17 00:30:41.758624 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-01-17 00:30:41.758644 | orchestrator | Saturday 17 January 2026 00:28:30 +0000 (0:00:00.249) 0:01:12.482 ******
2026-01-17 00:30:41.758663 | orchestrator | ok: [testbed-manager]
2026-01-17 00:30:41.758676 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:30:41.758688 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:30:41.758700 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:30:41.758712 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:30:41.758724 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:30:41.758736 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:30:41.758748 | orchestrator |
2026-01-17 00:30:41.758805 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-01-17 00:30:41.758819 | orchestrator | Saturday 17 January 2026 00:28:31 +0000 (0:00:01.389) 0:01:13.872 ******
2026-01-17 00:30:41.758831 | orchestrator | changed: [testbed-manager]
2026-01-17 00:30:41.758842 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:30:41.758857 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:30:41.758882 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:30:41.758907 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:30:41.758925 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:30:41.758942 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:30:41.758961 | orchestrator |
2026-01-17 00:30:41.758980 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-01-17 00:30:41.759003 | orchestrator | Saturday 17 January 2026 00:28:33 +0000 (0:00:02.243) 0:01:16.116 ******
2026-01-17 00:30:41.759027 | orchestrator | ok: [testbed-manager]
2026-01-17 00:30:41.759047 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:30:41.759066 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:30:41.759086 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:30:41.759107 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:30:41.759127 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:30:41.759143 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:30:41.759153 | orchestrator |
2026-01-17 00:30:41.759164 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-01-17 00:30:41.759175 | orchestrator | Saturday 17 January 2026 00:28:36 +0000 (0:00:03.103) 0:01:19.219 ******
2026-01-17 00:30:41.759186 | orchestrator | ok: [testbed-manager]
2026-01-17 00:30:41.759197 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:30:41.759207 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:30:41.759218 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:30:41.759228 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:30:41.759239 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:30:41.759250 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:30:41.759260 | orchestrator |
2026-01-17 00:30:41.759271 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-01-17 00:30:41.759282 | orchestrator | Saturday 17 January 2026 00:29:11 +0000 (0:00:34.455) 0:01:53.674 ******
2026-01-17 00:30:41.759293 | orchestrator | changed: [testbed-manager]
2026-01-17 00:30:41.759303 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:30:41.759314 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:30:41.759325 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:30:41.759335 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:30:41.759346 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:30:41.759356 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:30:41.759381 | orchestrator |
2026-01-17 00:30:41.759392 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-01-17 00:30:41.759403 | orchestrator | Saturday 17 January 2026 00:30:24 +0000 (0:01:13.162) 0:03:06.837 ******
2026-01-17 00:30:41.759414 | orchestrator | ok: [testbed-manager]
2026-01-17 00:30:41.759424 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:30:41.759435 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:30:41.759446 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:30:41.759457 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:30:41.759467 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:30:41.759478 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:30:41.759489 | orchestrator |
2026-01-17 00:30:41.759499 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-01-17 00:30:41.759511 | orchestrator | Saturday 17 January 2026 00:30:26 +0000 (0:00:01.972) 0:03:08.809 ******
2026-01-17 00:30:41.759521 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:30:41.759532 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:30:41.759543 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:30:41.759553 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:30:41.759579 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:30:41.759590 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:30:41.759600 | orchestrator | changed: [testbed-manager]
2026-01-17 00:30:41.759611 | orchestrator |
2026-01-17 00:30:41.759622 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-01-17 00:30:41.759633 | orchestrator | Saturday 17 January 2026 00:30:39 +0000 (0:00:13.166) 0:03:21.976 ******
2026-01-17 00:30:41.759678 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-01-17 00:30:41.759703 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-01-17 00:30:41.759718 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-01-17 00:30:41.759731 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-01-17 00:30:41.759743 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-01-17 00:30:41.759782 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-01-17 00:30:41.759813 | orchestrator |
2026-01-17 00:30:41.759825 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-01-17 00:30:41.759836 | orchestrator | Saturday 17 January 2026 00:30:39 +0000 (0:00:00.406) 0:03:22.383 ******
2026-01-17 00:30:41.759852 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-17 00:30:41.759863 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-17 00:30:41.759879 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:30:41.759896 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-17 00:30:41.759926 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:30:41.759944 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:30:41.759962 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-17 00:30:41.759980 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:30:41.759997 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-17 00:30:41.760015 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-17 00:30:41.760034 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-17 00:30:41.760053 | orchestrator |
2026-01-17 00:30:41.760072 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-01-17 00:30:41.760090 | orchestrator | Saturday 17 January 2026 00:30:41 +0000 (0:00:01.723) 0:03:24.106 ******
2026-01-17 00:30:41.760108 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-17 00:30:41.760129 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-17 00:30:41.760149 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-17 00:30:41.760167 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-17 00:30:41.760187 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-17 00:30:41.760228 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-17 00:30:48.980578 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-17 00:30:48.980685 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-17 00:30:48.980702 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-17 00:30:48.980715 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-17 00:30:48.980727 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-17 00:30:48.980738 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-17 00:30:48.980809 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-17 00:30:48.980831 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-17 00:30:48.980850 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-17 00:30:48.980868 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-17 00:30:48.980880 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-17 00:30:48.980891 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-17 00:30:48.980902 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-17 00:30:48.980938 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-17 00:30:48.980950 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-17 00:30:48.980962 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:30:48.980989 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-17 00:30:48.981001 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-17 00:30:48.981012 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-17 00:30:48.981023 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-17 00:30:48.981034 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-17 00:30:48.981044 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-17 00:30:48.981055 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-17 00:30:48.981065 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-17 00:30:48.981076 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-17 00:30:48.981087 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-17 00:30:48.981098 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-17 00:30:48.981110 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-17 00:30:48.981122 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-17 00:30:48.981135 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:30:48.981147 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-17 00:30:48.981159 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-17 00:30:48.981171 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-17 00:30:48.981183 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-17 00:30:48.981195 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-17 00:30:48.981207 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-17 00:30:48.981219 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:30:48.981231 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:30:48.981243 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-17 00:30:48.981259 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-17 00:30:48.981285 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-17 00:30:48.981306 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-17 00:30:48.981318 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-17 00:30:48.981357 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-17 00:30:48.981371 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-17 00:30:48.981383 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-17 00:30:48.981404 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-17 00:30:48.981417 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-17 00:30:48.981429 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-17 00:30:48.981442 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-17 00:30:48.981454 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-17 00:30:48.981467 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-17 00:30:48.981478 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-17 00:30:48.981489 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-17 00:30:48.981500 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-17 00:30:48.981511 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-17 00:30:48.981522 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-17 00:30:48.981532 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-17 00:30:48.981543 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-17 00:30:48.981553 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-17 00:30:48.981564 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-17 00:30:48.981575 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-17 00:30:48.981585 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-17 00:30:48.981596 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-17 00:30:48.981607 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-17 00:30:48.981617 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-17 00:30:48.981628 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-17 00:30:48.981639 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-17 00:30:48.981650 | orchestrator |
2026-01-17 00:30:48.981661 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-01-17 00:30:48.981672 | orchestrator | Saturday 17 January 2026 00:30:47 +0000 (0:00:06.158) 0:03:30.264 ******
2026-01-17 00:30:48.981682 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-17 00:30:48.981693 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-17 00:30:48.981704 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-17 00:30:48.981714 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-17 00:30:48.981725 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-17 00:30:48.981735 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-17 00:30:48.981771 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-17 00:30:48.981788 | orchestrator |
2026-01-17 00:30:48.981799 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-01-17 00:30:48.981810 | orchestrator | Saturday 17 January 2026 00:30:48 +0000 (0:00:00.631) 0:03:30.896 ******
2026-01-17 00:30:48.981820 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-17 00:30:48.981838 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:30:48.981849 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-17 00:30:48.981860 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-17 00:30:48.981871 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:30:48.981881 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:30:48.981897 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-17 00:30:48.981908 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:30:48.981919 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-17 00:30:48.981930 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-17 00:30:48.981949 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-17 00:31:02.201296 | orchestrator |
2026-01-17 00:31:02.201414 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-01-17 00:31:02.201431 | orchestrator | Saturday 17 January 2026 00:30:48 +0000 (0:00:00.499) 0:03:31.395 ******
2026-01-17 00:31:02.201443 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-17 00:31:02.201457 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:31:02.201469 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-17 00:31:02.201481 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-17 00:31:02.201492 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:31:02.201503 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:31:02.201514 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-17 00:31:02.201524 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:31:02.201535 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-17 00:31:02.201546 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-17 00:31:02.201557 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-17 00:31:02.201567 | orchestrator |
2026-01-17 00:31:02.201578 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-01-17 00:31:02.201589 | orchestrator | Saturday 17 January 2026 00:30:49 +0000 (0:00:00.629) 0:03:32.025 ******
2026-01-17 00:31:02.201600 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-17 00:31:02.201611 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:31:02.201621 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-17 00:31:02.201632 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-17 00:31:02.201690 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:31:02.201702 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:31:02.201713 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-17 00:31:02.201724 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:31:02.201837 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-17 00:31:02.201857 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-17 00:31:02.201871 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-17 00:31:02.201908 | orchestrator |
2026-01-17 00:31:02.201921 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-01-17 00:31:02.201934 | orchestrator | Saturday 17 January 2026 00:30:50 +0000 (0:00:00.647) 0:03:32.673 ******
2026-01-17 00:31:02.201947 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:31:02.201959 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:31:02.201971 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:31:02.201983 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:31:02.201996 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:31:02.202008 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:31:02.202092 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:31:02.202105 | orchestrator |
2026-01-17 00:31:02.202126 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-01-17 00:31:02.202138 | orchestrator | Saturday 17 January 2026 00:30:50 +0000 (0:00:00.340) 0:03:33.013 ******
2026-01-17 00:31:02.202151 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:31:02.202165 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:31:02.202177 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:31:02.202190 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:31:02.202201 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:31:02.202211 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:31:02.202222 | orchestrator | ok: [testbed-manager]
2026-01-17 00:31:02.202233 | orchestrator |
2026-01-17 00:31:02.202244 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-01-17 00:31:02.202254 | orchestrator | Saturday 17 January 2026 00:30:56 +0000 (0:00:05.425) 0:03:38.439 ******
2026-01-17 00:31:02.202266 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-01-17 00:31:02.202277 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-01-17 00:31:02.202288 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:31:02.202298 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-01-17 00:31:02.202309 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:31:02.202320 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-01-17 00:31:02.202331 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:31:02.202341 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-01-17 00:31:02.202352 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:31:02.202362 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-01-17 00:31:02.202373 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:31:02.202384 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:31:02.202409 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-01-17 00:31:02.202420 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:31:02.202431 | orchestrator |
2026-01-17 00:31:02.202442 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-01-17 00:31:02.202453 | orchestrator | Saturday 17 January 2026 00:30:56 +0000 (0:00:00.332) 0:03:38.772 ******
2026-01-17 00:31:02.202463 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-01-17 00:31:02.202475 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-01-17 00:31:02.202485 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-01-17 00:31:02.202515 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-01-17 00:31:02.202527 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-01-17 00:31:02.202537 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-01-17 00:31:02.202548 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-01-17 00:31:02.202559 | orchestrator |
2026-01-17 00:31:02.202569 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-01-17 00:31:02.202580 | orchestrator | Saturday 17 January 2026 00:30:57 +0000 (0:00:01.071) 0:03:39.843 ******
2026-01-17 00:31:02.202593 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:31:02.202607 | orchestrator |
2026-01-17 00:31:02.202627 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-01-17 00:31:02.202638 | orchestrator | Saturday 17 January 2026 00:30:57 +0000 (0:00:00.552) 0:03:40.396 ******
2026-01-17 00:31:02.202649 | orchestrator | ok: [testbed-manager]
2026-01-17 00:31:02.202660 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:31:02.202671 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:31:02.202681 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:31:02.202692 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:31:02.202703 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:31:02.202713 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:31:02.202724 | orchestrator |
2026-01-17 00:31:02.202761 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-01-17 00:31:02.202774 | orchestrator | Saturday 17 January 2026 00:30:59 +0000 (0:00:01.268) 0:03:41.664
****** 2026-01-17 00:31:02.202785 | orchestrator | ok: [testbed-manager] 2026-01-17 00:31:02.202795 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:31:02.202806 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:31:02.202816 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:31:02.202827 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:31:02.202837 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:31:02.202848 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:31:02.202859 | orchestrator | 2026-01-17 00:31:02.202869 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-01-17 00:31:02.202880 | orchestrator | Saturday 17 January 2026 00:30:59 +0000 (0:00:00.649) 0:03:42.314 ****** 2026-01-17 00:31:02.202891 | orchestrator | changed: [testbed-manager] 2026-01-17 00:31:02.202902 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:31:02.202912 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:31:02.202923 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:31:02.202933 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:31:02.202944 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:31:02.202954 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:31:02.202965 | orchestrator | 2026-01-17 00:31:02.202976 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-01-17 00:31:02.202986 | orchestrator | Saturday 17 January 2026 00:31:00 +0000 (0:00:00.634) 0:03:42.948 ****** 2026-01-17 00:31:02.202997 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:31:02.203007 | orchestrator | ok: [testbed-manager] 2026-01-17 00:31:02.203018 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:31:02.203029 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:31:02.203039 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:31:02.203050 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:31:02.203060 | orchestrator | ok: [testbed-node-2] 2026-01-17 
00:31:02.203071 | orchestrator | 2026-01-17 00:31:02.203082 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-01-17 00:31:02.203093 | orchestrator | Saturday 17 January 2026 00:31:01 +0000 (0:00:00.626) 0:03:43.575 ****** 2026-01-17 00:31:02.203109 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768608259.818, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 00:31:02.203123 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768608323.1897156, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 00:31:02.203143 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768608286.839364, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 00:31:02.203187 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768608305.9169195, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 00:31:07.249663 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768608300.4354522, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 00:31:07.249834 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768608319.4168518, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 00:31:07.249849 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768608313.3078284, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 00:31:07.249856 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 00:31:07.249863 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 00:31:07.249903 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 00:31:07.249940 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 00:31:07.249963 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 00:31:07.249970 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 
00:31:07.249976 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 00:31:07.249982 | orchestrator | 2026-01-17 00:31:07.249990 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-01-17 00:31:07.249997 | orchestrator | Saturday 17 January 2026 00:31:02 +0000 (0:00:01.038) 0:03:44.613 ****** 2026-01-17 00:31:07.250003 | orchestrator | changed: [testbed-manager] 2026-01-17 00:31:07.250010 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:31:07.250055 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:31:07.250061 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:31:07.250067 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:31:07.250073 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:31:07.250080 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:31:07.250086 | orchestrator | 2026-01-17 00:31:07.250094 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-01-17 00:31:07.250101 | orchestrator | Saturday 17 January 2026 00:31:03 +0000 (0:00:01.181) 0:03:45.795 ****** 2026-01-17 00:31:07.250108 | orchestrator | changed: [testbed-manager] 2026-01-17 00:31:07.250114 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:31:07.250129 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:31:07.250135 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:31:07.250141 | orchestrator | changed: [testbed-node-5] 
2026-01-17 00:31:07.250148 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:31:07.250154 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:31:07.250161 | orchestrator | 2026-01-17 00:31:07.250168 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-01-17 00:31:07.250175 | orchestrator | Saturday 17 January 2026 00:31:04 +0000 (0:00:01.191) 0:03:46.987 ****** 2026-01-17 00:31:07.250182 | orchestrator | changed: [testbed-manager] 2026-01-17 00:31:07.250189 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:31:07.250199 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:31:07.250228 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:31:07.250236 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:31:07.250247 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:31:07.250255 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:31:07.250264 | orchestrator | 2026-01-17 00:31:07.250274 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-01-17 00:31:07.250283 | orchestrator | Saturday 17 January 2026 00:31:05 +0000 (0:00:01.175) 0:03:48.162 ****** 2026-01-17 00:31:07.250291 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:31:07.250300 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:31:07.250310 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:31:07.250320 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:31:07.250329 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:31:07.250338 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:31:07.250349 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:31:07.250358 | orchestrator | 2026-01-17 00:31:07.250373 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-01-17 00:31:07.250384 | orchestrator | Saturday 17 January 2026 00:31:06 +0000 (0:00:00.307) 0:03:48.470 ****** 2026-01-17 
00:31:07.250392 | orchestrator | ok: [testbed-manager] 2026-01-17 00:31:07.250402 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:31:07.250411 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:31:07.250420 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:31:07.250430 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:31:07.250439 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:31:07.250448 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:31:07.250456 | orchestrator | 2026-01-17 00:31:07.250463 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-01-17 00:31:07.250472 | orchestrator | Saturday 17 January 2026 00:31:06 +0000 (0:00:00.761) 0:03:49.232 ****** 2026-01-17 00:31:07.250482 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:31:07.250493 | orchestrator | 2026-01-17 00:31:07.250502 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-01-17 00:31:07.250519 | orchestrator | Saturday 17 January 2026 00:31:07 +0000 (0:00:00.429) 0:03:49.662 ****** 2026-01-17 00:32:24.655500 | orchestrator | ok: [testbed-manager] 2026-01-17 00:32:24.655633 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:32:24.655728 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:32:24.655741 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:32:24.655754 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:32:24.655771 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:32:24.655781 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:32:24.655791 | orchestrator | 2026-01-17 00:32:24.655804 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-01-17 00:32:24.655822 | orchestrator | 
Saturday 17 January 2026 00:31:15 +0000 (0:00:08.435) 0:03:58.097 ****** 2026-01-17 00:32:24.655838 | orchestrator | ok: [testbed-manager] 2026-01-17 00:32:24.655854 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:32:24.655870 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:32:24.655922 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:32:24.655934 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:32:24.655943 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:32:24.655952 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:32:24.655962 | orchestrator | 2026-01-17 00:32:24.655971 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-01-17 00:32:24.655981 | orchestrator | Saturday 17 January 2026 00:31:16 +0000 (0:00:01.259) 0:03:59.357 ****** 2026-01-17 00:32:24.655991 | orchestrator | ok: [testbed-manager] 2026-01-17 00:32:24.656001 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:32:24.656010 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:32:24.656020 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:32:24.656029 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:32:24.656039 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:32:24.656050 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:32:24.656061 | orchestrator | 2026-01-17 00:32:24.656072 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-01-17 00:32:24.656084 | orchestrator | Saturday 17 January 2026 00:31:18 +0000 (0:00:01.177) 0:04:00.534 ****** 2026-01-17 00:32:24.656094 | orchestrator | ok: [testbed-manager] 2026-01-17 00:32:24.656105 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:32:24.656115 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:32:24.656126 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:32:24.656136 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:32:24.656146 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:32:24.656157 | orchestrator | ok: 
[testbed-node-2] 2026-01-17 00:32:24.656167 | orchestrator | 2026-01-17 00:32:24.656178 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-01-17 00:32:24.656191 | orchestrator | Saturday 17 January 2026 00:31:18 +0000 (0:00:00.321) 0:04:00.856 ****** 2026-01-17 00:32:24.656202 | orchestrator | ok: [testbed-manager] 2026-01-17 00:32:24.656212 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:32:24.656223 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:32:24.656234 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:32:24.656245 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:32:24.656256 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:32:24.656266 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:32:24.656276 | orchestrator | 2026-01-17 00:32:24.656285 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-01-17 00:32:24.656295 | orchestrator | Saturday 17 January 2026 00:31:18 +0000 (0:00:00.364) 0:04:01.221 ****** 2026-01-17 00:32:24.656304 | orchestrator | ok: [testbed-manager] 2026-01-17 00:32:24.656314 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:32:24.656323 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:32:24.656332 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:32:24.656341 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:32:24.656351 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:32:24.656360 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:32:24.656369 | orchestrator | 2026-01-17 00:32:24.656379 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-01-17 00:32:24.656388 | orchestrator | Saturday 17 January 2026 00:31:19 +0000 (0:00:00.293) 0:04:01.514 ****** 2026-01-17 00:32:24.656398 | orchestrator | ok: [testbed-manager] 2026-01-17 00:32:24.656407 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:32:24.656417 | orchestrator | ok: 
[testbed-node-3] 2026-01-17 00:32:24.656426 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:32:24.656436 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:32:24.656445 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:32:24.656454 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:32:24.656463 | orchestrator | 2026-01-17 00:32:24.656473 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-01-17 00:32:24.656482 | orchestrator | Saturday 17 January 2026 00:31:24 +0000 (0:00:05.267) 0:04:06.781 ****** 2026-01-17 00:32:24.656494 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:32:24.656515 | orchestrator | 2026-01-17 00:32:24.656525 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-01-17 00:32:24.656549 | orchestrator | Saturday 17 January 2026 00:31:24 +0000 (0:00:00.398) 0:04:07.180 ****** 2026-01-17 00:32:24.656559 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-01-17 00:32:24.656568 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-01-17 00:32:24.656578 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-01-17 00:32:24.656588 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-01-17 00:32:24.656597 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:32:24.656607 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-01-17 00:32:24.656616 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-01-17 00:32:24.656625 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:32:24.656635 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-01-17 00:32:24.656672 | orchestrator | 
skipping: [testbed-node-4] 2026-01-17 00:32:24.656685 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-01-17 00:32:24.656694 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:32:24.656704 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-01-17 00:32:24.656713 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-01-17 00:32:24.656723 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-01-17 00:32:24.656738 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-01-17 00:32:24.656774 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:32:24.656791 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:32:24.656806 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-01-17 00:32:24.656823 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-01-17 00:32:24.656841 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:32:24.656858 | orchestrator | 2026-01-17 00:32:24.656876 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-01-17 00:32:24.656887 | orchestrator | Saturday 17 January 2026 00:31:25 +0000 (0:00:00.369) 0:04:07.549 ****** 2026-01-17 00:32:24.656897 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:32:24.656907 | orchestrator | 2026-01-17 00:32:24.656916 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-01-17 00:32:24.656926 | orchestrator | Saturday 17 January 2026 00:31:25 +0000 (0:00:00.454) 0:04:08.004 ****** 2026-01-17 00:32:24.656935 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-01-17 00:32:24.656945 | orchestrator | skipping: 
[testbed-node-3] => (item=ModemManager.service)  2026-01-17 00:32:24.656954 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:32:24.656964 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:32:24.656973 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-01-17 00:32:24.656982 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:32:24.656992 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-01-17 00:32:24.657001 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-01-17 00:32:24.657010 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:32:24.657020 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-01-17 00:32:24.657029 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:32:24.657039 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:32:24.657048 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-01-17 00:32:24.657057 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:32:24.657067 | orchestrator | 2026-01-17 00:32:24.657084 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-01-17 00:32:24.657093 | orchestrator | Saturday 17 January 2026 00:31:25 +0000 (0:00:00.325) 0:04:08.329 ****** 2026-01-17 00:32:24.657103 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:32:24.657113 | orchestrator | 2026-01-17 00:32:24.657122 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-01-17 00:32:24.657132 | orchestrator | Saturday 17 January 2026 00:31:26 +0000 (0:00:00.520) 0:04:08.849 ****** 2026-01-17 00:32:24.657141 | orchestrator | changed: [testbed-node-1] 2026-01-17 
00:32:24.657150 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:32:24.657160 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:32:24.657169 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:32:24.657179 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:32:24.657188 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:32:24.657197 | orchestrator | changed: [testbed-manager] 2026-01-17 00:32:24.657207 | orchestrator | 2026-01-17 00:32:24.657216 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-01-17 00:32:24.657226 | orchestrator | Saturday 17 January 2026 00:31:59 +0000 (0:00:33.401) 0:04:42.251 ****** 2026-01-17 00:32:24.657235 | orchestrator | changed: [testbed-manager] 2026-01-17 00:32:24.657244 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:32:24.657254 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:32:24.657263 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:32:24.657272 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:32:24.657282 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:32:24.657291 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:32:24.657300 | orchestrator | 2026-01-17 00:32:24.657310 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-01-17 00:32:24.657319 | orchestrator | Saturday 17 January 2026 00:32:08 +0000 (0:00:08.818) 0:04:51.069 ****** 2026-01-17 00:32:24.657329 | orchestrator | changed: [testbed-manager] 2026-01-17 00:32:24.657338 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:32:24.657347 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:32:24.657357 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:32:24.657366 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:32:24.657376 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:32:24.657385 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:32:24.657394 | 
orchestrator |
2026-01-17 00:32:24.657404 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-01-17 00:32:24.657414 | orchestrator | Saturday 17 January 2026 00:32:16 +0000 (0:00:07.808) 0:04:58.878 ******
2026-01-17 00:32:24.657423 | orchestrator | ok: [testbed-manager]
2026-01-17 00:32:24.657433 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:32:24.657442 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:32:24.657451 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:32:24.657461 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:32:24.657470 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:32:24.657480 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:32:24.657489 | orchestrator |
2026-01-17 00:32:24.657499 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-01-17 00:32:24.657508 | orchestrator | Saturday 17 January 2026 00:32:18 +0000 (0:00:01.945) 0:05:00.824 ******
2026-01-17 00:32:24.657518 | orchestrator | changed: [testbed-manager]
2026-01-17 00:32:24.657527 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:32:24.657537 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:32:24.657546 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:32:24.657555 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:32:24.657565 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:32:24.657574 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:32:24.657584 | orchestrator |
2026-01-17 00:32:24.657601 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-01-17 00:32:36.385532 | orchestrator | Saturday 17 January 2026 00:32:24 +0000 (0:00:06.246) 0:05:07.070 ******
2026-01-17 00:32:36.385772 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:32:36.385804 | orchestrator |
2026-01-17 00:32:36.385826 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-01-17 00:32:36.385871 | orchestrator | Saturday 17 January 2026 00:32:25 +0000 (0:00:00.559) 0:05:07.629 ******
2026-01-17 00:32:36.385895 | orchestrator | changed: [testbed-manager]
2026-01-17 00:32:36.385916 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:32:36.385937 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:32:36.385957 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:32:36.385976 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:32:36.385996 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:32:36.386078 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:32:36.386102 | orchestrator |
2026-01-17 00:32:36.386122 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-01-17 00:32:36.386144 | orchestrator | Saturday 17 January 2026 00:32:26 +0000 (0:00:00.800) 0:05:08.429 ******
2026-01-17 00:32:36.386165 | orchestrator | ok: [testbed-manager]
2026-01-17 00:32:36.386186 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:32:36.386206 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:32:36.386227 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:32:36.386247 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:32:36.386269 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:32:36.386289 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:32:36.386309 | orchestrator |
2026-01-17 00:32:36.386329 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-01-17 00:32:36.386348 | orchestrator | Saturday 17 January 2026 00:32:27 +0000 (0:00:01.759) 0:05:10.189 ******
2026-01-17 00:32:36.386369 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:32:36.386389 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:32:36.386408 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:32:36.386428 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:32:36.386447 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:32:36.386467 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:32:36.386488 | orchestrator | changed: [testbed-manager]
2026-01-17 00:32:36.386507 | orchestrator |
2026-01-17 00:32:36.386527 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-01-17 00:32:36.386548 | orchestrator | Saturday 17 January 2026 00:32:28 +0000 (0:00:00.873) 0:05:11.062 ******
2026-01-17 00:32:36.386567 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:32:36.386587 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:32:36.386607 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:32:36.386647 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:32:36.386668 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:32:36.386687 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:32:36.386705 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:32:36.386724 | orchestrator |
2026-01-17 00:32:36.386742 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-01-17 00:32:36.386761 | orchestrator | Saturday 17 January 2026 00:32:28 +0000 (0:00:00.314) 0:05:11.377 ******
2026-01-17 00:32:36.386772 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:32:36.386783 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:32:36.386793 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:32:36.386804 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:32:36.386815 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:32:36.386825 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:32:36.386843 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:32:36.386861 | orchestrator |
2026-01-17 00:32:36.386880 |
orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-01-17 00:32:36.386931 | orchestrator | Saturday 17 January 2026 00:32:29 +0000 (0:00:00.314) 0:05:11.815 ******
2026-01-17 00:32:36.386949 | orchestrator | ok: [testbed-manager]
2026-01-17 00:32:36.386960 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:32:36.386977 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:32:36.386995 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:32:36.387014 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:32:36.387032 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:32:36.387051 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:32:36.387069 | orchestrator |
2026-01-17 00:32:36.387088 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-01-17 00:32:36.387103 | orchestrator | Saturday 17 January 2026 00:32:29 +0000 (0:00:00.314) 0:05:12.130 ******
2026-01-17 00:32:36.387114 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:32:36.387124 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:32:36.387137 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:32:36.387164 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:32:36.387183 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:32:36.387201 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:32:36.387242 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:32:36.387253 | orchestrator |
2026-01-17 00:32:36.387263 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-01-17 00:32:36.387273 | orchestrator | Saturday 17 January 2026 00:32:30 +0000 (0:00:00.302) 0:05:12.433 ******
2026-01-17 00:32:36.387283 | orchestrator | ok: [testbed-manager]
2026-01-17 00:32:36.387292 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:32:36.387301 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:32:36.387311 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:32:36.387320 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:32:36.387329 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:32:36.387339 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:32:36.387348 | orchestrator |
2026-01-17 00:32:36.387358 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-01-17 00:32:36.387367 | orchestrator | Saturday 17 January 2026 00:32:30 +0000 (0:00:00.336) 0:05:12.769 ******
2026-01-17 00:32:36.387376 | orchestrator | ok: [testbed-manager] =>
2026-01-17 00:32:36.387386 | orchestrator |  docker_version: 5:27.5.1
2026-01-17 00:32:36.387395 | orchestrator | ok: [testbed-node-3] =>
2026-01-17 00:32:36.387404 | orchestrator |  docker_version: 5:27.5.1
2026-01-17 00:32:36.387414 | orchestrator | ok: [testbed-node-4] =>
2026-01-17 00:32:36.387423 | orchestrator |  docker_version: 5:27.5.1
2026-01-17 00:32:36.387433 | orchestrator | ok: [testbed-node-5] =>
2026-01-17 00:32:36.387442 | orchestrator |  docker_version: 5:27.5.1
2026-01-17 00:32:36.387472 | orchestrator | ok: [testbed-node-0] =>
2026-01-17 00:32:36.387483 | orchestrator |  docker_version: 5:27.5.1
2026-01-17 00:32:36.387493 | orchestrator | ok: [testbed-node-1] =>
2026-01-17 00:32:36.387502 | orchestrator |  docker_version: 5:27.5.1
2026-01-17 00:32:36.387512 | orchestrator | ok: [testbed-node-2] =>
2026-01-17 00:32:36.387521 | orchestrator |  docker_version: 5:27.5.1
2026-01-17 00:32:36.387531 | orchestrator |
2026-01-17 00:32:36.387540 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-01-17 00:32:36.387550 | orchestrator | Saturday 17 January 2026 00:32:30 +0000 (0:00:00.279) 0:05:13.049 ******
2026-01-17 00:32:36.387566 | orchestrator | ok: [testbed-manager] =>
2026-01-17 00:32:36.387582 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-17 00:32:36.387598 | orchestrator | ok: [testbed-node-3] =>
2026-01-17 00:32:36.387614 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-17 00:32:36.387650 | orchestrator | ok: [testbed-node-4] =>
2026-01-17 00:32:36.387667 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-17 00:32:36.387684 | orchestrator | ok: [testbed-node-5] =>
2026-01-17 00:32:36.387702 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-17 00:32:36.387719 | orchestrator | ok: [testbed-node-0] =>
2026-01-17 00:32:36.387735 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-17 00:32:36.387752 | orchestrator | ok: [testbed-node-1] =>
2026-01-17 00:32:36.387782 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-17 00:32:36.387798 | orchestrator | ok: [testbed-node-2] =>
2026-01-17 00:32:36.387815 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-17 00:32:36.387833 | orchestrator |
2026-01-17 00:32:36.387850 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-01-17 00:32:36.387866 | orchestrator | Saturday 17 January 2026 00:32:30 +0000 (0:00:00.282) 0:05:13.331 ******
2026-01-17 00:32:36.387883 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:32:36.387900 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:32:36.387915 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:32:36.387931 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:32:36.387948 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:32:36.387965 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:32:36.387981 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:32:36.387998 | orchestrator |
2026-01-17 00:32:36.388015 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-01-17 00:32:36.388031 | orchestrator | Saturday 17 January 2026 00:32:31 +0000 (0:00:00.266) 0:05:13.598 ******
2026-01-17 00:32:36.388047 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:32:36.388063 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:32:36.388079 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:32:36.388095 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:32:36.388111 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:32:36.388127 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:32:36.388144 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:32:36.388160 | orchestrator |
2026-01-17 00:32:36.388177 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-01-17 00:32:36.388194 | orchestrator | Saturday 17 January 2026 00:32:31 +0000 (0:00:00.433) 0:05:13.881 ******
2026-01-17 00:32:36.388211 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:32:36.388230 | orchestrator |
2026-01-17 00:32:36.388248 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-01-17 00:32:36.388264 | orchestrator | Saturday 17 January 2026 00:32:31 +0000 (0:00:00.433) 0:05:14.314 ******
2026-01-17 00:32:36.388279 | orchestrator | ok: [testbed-manager]
2026-01-17 00:32:36.388295 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:32:36.388312 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:32:36.388327 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:32:36.388343 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:32:36.388360 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:32:36.388376 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:32:36.388393 | orchestrator |
2026-01-17 00:32:36.388409 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-01-17 00:32:36.388425 | orchestrator | Saturday 17 January 2026 00:32:32 +0000 (0:00:01.047) 0:05:15.361 ******
2026-01-17 00:32:36.388441 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:32:36.388457 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:32:36.388474 | orchestrator | ok: [testbed-manager]
2026-01-17 00:32:36.388491 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:32:36.388508 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:32:36.388523 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:32:36.388540 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:32:36.388555 | orchestrator |
2026-01-17 00:32:36.388572 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-01-17 00:32:36.388597 | orchestrator | Saturday 17 January 2026 00:32:35 +0000 (0:00:02.956) 0:05:18.317 ******
2026-01-17 00:32:36.388613 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-01-17 00:32:36.388654 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-01-17 00:32:36.388671 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-01-17 00:32:36.388700 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:32:36.388717 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-01-17 00:32:36.388731 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-01-17 00:32:36.388746 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-01-17 00:32:36.388761 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:32:36.388778 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-01-17 00:32:36.388795 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-01-17 00:32:36.388812 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-01-17 00:32:36.388829 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:32:36.388845 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-01-17 00:32:36.388861 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-01-17 00:32:36.388878 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-01-17 00:32:36.388893 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:32:36.388923 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-01-17 00:33:38.570414 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-01-17 00:33:38.570532 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-01-17 00:33:38.570545 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:33:38.570553 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-01-17 00:33:38.570559 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-01-17 00:33:38.570566 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-01-17 00:33:38.570573 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:33:38.570579 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-01-17 00:33:38.570586 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-01-17 00:33:38.570592 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-01-17 00:33:38.570598 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:33:38.570604 | orchestrator |
2026-01-17 00:33:38.570612 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-01-17 00:33:38.570619 | orchestrator | Saturday 17 January 2026 00:32:36 +0000 (0:00:00.709) 0:05:19.026 ******
2026-01-17 00:33:38.570625 | orchestrator | ok: [testbed-manager]
2026-01-17 00:33:38.570632 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:33:38.570638 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:33:38.570644 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:33:38.570650 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:33:38.570656 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:33:38.570663 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:33:38.570669 | orchestrator |
2026-01-17
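The "Add repository gpg key" and "Add repository" tasks that follow configure Docker's upstream apt repository on each node. As context, a minimal shell sketch of the equivalent manual steps; the keyring path, the `amd64` architecture, and the Ubuntu 24.04 codename `noble` are illustrative assumptions, not values taken from this log:

```shell
#!/bin/sh
# Sketch: compose the apt source entry a Docker repository setup installs.
# Keyring path, architecture, and codename are assumptions for Ubuntu 24.04.
set -eu

ARCH="amd64"
CODENAME="noble"
KEYRING="/etc/apt/keyrings/docker.asc"

# In a real run the signing key would be fetched first, e.g.:
#   curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o "$KEYRING"

# Written to /tmp here for illustration; a real system uses
# /etc/apt/sources.list.d/ followed by an `apt-get update`.
printf 'deb [arch=%s signed-by=%s] https://download.docker.com/linux/ubuntu %s stable\n' \
    "$ARCH" "$KEYRING" "$CODENAME" > /tmp/docker.list

cat /tmp/docker.list
```

The `signed-by` option scopes the key to this one repository, which is why the role manages a key and a source entry as separate tasks.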
00:33:38.570675 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-01-17 00:33:38.570681 | orchestrator | Saturday 17 January 2026 00:32:43 +0000 (0:00:06.821) 0:05:25.848 ******
2026-01-17 00:33:38.570687 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:33:38.570693 | orchestrator | ok: [testbed-manager]
2026-01-17 00:33:38.570700 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:33:38.570706 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:33:38.570712 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:33:38.570718 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:33:38.570724 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:33:38.570730 | orchestrator |
2026-01-17 00:33:38.570736 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-01-17 00:33:38.570742 | orchestrator | Saturday 17 January 2026 00:32:44 +0000 (0:00:01.114) 0:05:26.962 ******
2026-01-17 00:33:38.570751 | orchestrator | ok: [testbed-manager]
2026-01-17 00:33:38.570762 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:33:38.570771 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:33:38.570781 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:33:38.570791 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:33:38.570827 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:33:38.570838 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:33:38.570849 | orchestrator |
2026-01-17 00:33:38.570859 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-01-17 00:33:38.570870 | orchestrator | Saturday 17 January 2026 00:32:52 +0000 (0:00:08.008) 0:05:34.971 ******
2026-01-17 00:33:38.570880 | orchestrator | changed: [testbed-manager]
2026-01-17 00:33:38.570889 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:33:38.570899 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:33:38.570909 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:33:38.570918 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:33:38.570927 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:33:38.570936 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:33:38.570946 | orchestrator |
2026-01-17 00:33:38.570956 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-01-17 00:33:38.570966 | orchestrator | Saturday 17 January 2026 00:32:55 +0000 (0:00:03.338) 0:05:38.309 ******
2026-01-17 00:33:38.570977 | orchestrator | ok: [testbed-manager]
2026-01-17 00:33:38.570987 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:33:38.570997 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:33:38.571008 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:33:38.571018 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:33:38.571028 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:33:38.571039 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:33:38.571051 | orchestrator |
2026-01-17 00:33:38.571061 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-01-17 00:33:38.571072 | orchestrator | Saturday 17 January 2026 00:32:57 +0000 (0:00:01.392) 0:05:39.702 ******
2026-01-17 00:33:38.571084 | orchestrator | ok: [testbed-manager]
2026-01-17 00:33:38.571094 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:33:38.571105 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:33:38.571115 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:33:38.571125 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:33:38.571134 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:33:38.571160 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:33:38.571173 | orchestrator |
2026-01-17 00:33:38.571184 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-01-17 00:33:38.571194 | orchestrator | Saturday 17 January 2026 00:32:58 +0000 (0:00:01.712) 0:05:41.414 ******
2026-01-17 00:33:38.571206 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:33:38.571217 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:33:38.571228 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:33:38.571237 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:33:38.571248 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:33:38.571259 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:33:38.571270 | orchestrator | changed: [testbed-manager]
2026-01-17 00:33:38.571280 | orchestrator |
2026-01-17 00:33:38.571291 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-01-17 00:33:38.571302 | orchestrator | Saturday 17 January 2026 00:32:59 +0000 (0:00:00.653) 0:05:42.068 ******
2026-01-17 00:33:38.571313 | orchestrator | ok: [testbed-manager]
2026-01-17 00:33:38.571324 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:33:38.571334 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:33:38.571345 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:33:38.571355 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:33:38.571365 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:33:38.571375 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:33:38.571384 | orchestrator |
2026-01-17 00:33:38.571393 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-01-17 00:33:38.571424 | orchestrator | Saturday 17 January 2026 00:33:09 +0000 (0:00:09.873) 0:05:51.941 ******
2026-01-17 00:33:38.571435 | orchestrator | changed: [testbed-manager]
2026-01-17 00:33:38.571512 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:33:38.571541 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:33:38.571563 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:33:38.571573 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:33:38.571579 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:33:38.571588 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:33:38.571599 | orchestrator |
2026-01-17 00:33:38.571609 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-01-17 00:33:38.571620 | orchestrator | Saturday 17 January 2026 00:33:10 +0000 (0:00:00.922) 0:05:52.863 ******
2026-01-17 00:33:38.571630 | orchestrator | ok: [testbed-manager]
2026-01-17 00:33:38.571641 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:33:38.571651 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:33:38.571662 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:33:38.571672 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:33:38.571682 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:33:38.571693 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:33:38.571704 | orchestrator |
2026-01-17 00:33:38.571714 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-01-17 00:33:38.571725 | orchestrator | Saturday 17 January 2026 00:33:20 +0000 (0:00:09.669) 0:06:02.533 ******
2026-01-17 00:33:38.571735 | orchestrator | ok: [testbed-manager]
2026-01-17 00:33:38.571745 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:33:38.571755 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:33:38.571765 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:33:38.571775 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:33:38.571785 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:33:38.571796 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:33:38.571806 | orchestrator |
2026-01-17 00:33:38.571814 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-01-17 00:33:38.571820 | orchestrator | Saturday 17 January 2026 00:33:31 +0000 (0:00:11.258) 0:06:13.791 ******
2026-01-17 00:33:38.571826 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-01-17 00:33:38.571833 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-01-17 00:33:38.571839 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-01-17 00:33:38.571845 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-01-17 00:33:38.571851 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-01-17 00:33:38.571857 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-01-17 00:33:38.571863 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-01-17 00:33:38.571869 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-01-17 00:33:38.571875 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-01-17 00:33:38.571881 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-01-17 00:33:38.571887 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-01-17 00:33:38.571894 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-01-17 00:33:38.571900 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-01-17 00:33:38.571906 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-01-17 00:33:38.571912 | orchestrator |
2026-01-17 00:33:38.571918 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-01-17 00:33:38.571924 | orchestrator | Saturday 17 January 2026 00:33:32 +0000 (0:00:01.243) 0:06:15.034 ******
2026-01-17 00:33:38.571930 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:33:38.571937 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:33:38.571947 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:33:38.571957 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:33:38.571968 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:33:38.571978 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:33:38.571987 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:33:38.571997 | orchestrator |
2026-01-17 00:33:38.572008 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-01-17 00:33:38.572019 | orchestrator | Saturday 17 January 2026 00:33:33 +0000 (0:00:00.588) 0:06:15.623 ******
2026-01-17 00:33:38.572038 | orchestrator | ok: [testbed-manager]
2026-01-17 00:33:38.572049 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:33:38.572059 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:33:38.572107 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:33:38.572119 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:33:38.572126 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:33:38.572132 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:33:38.572138 | orchestrator |
2026-01-17 00:33:38.572144 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-01-17 00:33:38.572159 | orchestrator | Saturday 17 January 2026 00:33:37 +0000 (0:00:04.304) 0:06:19.927 ******
2026-01-17 00:33:38.572165 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:33:38.572171 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:33:38.572177 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:33:38.572183 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:33:38.572189 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:33:38.572195 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:33:38.572201 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:33:38.572207 | orchestrator |
2026-01-17 00:33:38.572214 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-01-17 00:33:38.572221 | orchestrator | Saturday 17 January 2026 00:33:38 +0000 (0:00:00.561) 0:06:20.488 ******
2026-01-17 00:33:38.572227 | orchestrator | skipping: [testbed-manager] =>
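The "Pin docker package version" and "Lock containerd package" tasks above, and the "Block installation" tasks (skipped in this run, since the Python bindings came from Debian Sid instead of pip), all rest on the same two apt mechanisms: preference pinning and package holds. A minimal sketch of both; the file path and the exact stanza wording are illustrative assumptions, with only the version `5:27.5.1` taken from the `docker_version` printed earlier in this log:

```shell
#!/bin/sh
# Sketch: apt preference stanzas for version pinning and package blocking.
# Written to /tmp for illustration; a real system uses /etc/apt/preferences.d/.
set -eu

cat > /tmp/docker-pins.pref <<'EOF'
# Hold docker-ce/docker-ce-cli at the version reported by the role.
Package: docker-ce docker-ce-cli
Pin: version 5:27.5.1*
Pin-Priority: 1000

# A negative priority blocks installation of a package entirely.
Package: python3-docker python-docker
Pin: version *
Pin-Priority: -1
EOF

# The containerd lock/unlock pair corresponds to dpkg holds, e.g.:
#   apt-mark hold containerd.io   /   apt-mark unhold containerd.io

cat /tmp/docker-pins.pref
```

Pinning keeps `apt-get upgrade` from drifting the Docker version between CI runs; the hold protects containerd from being replaced while containers are running.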
(item=python3-docker)
2026-01-17 00:33:38.572233 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-01-17 00:33:38.572240 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:33:38.572246 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-01-17 00:33:38.572252 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-01-17 00:33:38.572295 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:33:38.572303 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-01-17 00:33:38.572309 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-01-17 00:33:38.572315 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:33:38.572331 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-01-17 00:33:59.058100 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-01-17 00:33:59.058197 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:33:59.058209 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-01-17 00:33:59.058218 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-01-17 00:33:59.058227 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:33:59.058236 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-01-17 00:33:59.058244 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-01-17 00:33:59.058252 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:33:59.058260 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-01-17 00:33:59.058268 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-01-17 00:33:59.058276 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:33:59.058284 | orchestrator |
2026-01-17 00:33:59.058293 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-01-17 00:33:59.058303 | orchestrator | Saturday 17 January 2026 00:33:38 +0000 (0:00:00.777) 0:06:21.266 ******
2026-01-17 00:33:59.058311 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:33:59.058319 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:33:59.058327 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:33:59.058335 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:33:59.058342 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:33:59.058350 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:33:59.058358 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:33:59.058366 | orchestrator |
2026-01-17 00:33:59.058374 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-01-17 00:33:59.058403 | orchestrator | Saturday 17 January 2026 00:33:39 +0000 (0:00:00.566) 0:06:21.832 ******
2026-01-17 00:33:59.058411 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:33:59.058419 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:33:59.058427 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:33:59.058435 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:33:59.058442 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:33:59.058450 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:33:59.058458 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:33:59.058466 | orchestrator |
2026-01-17 00:33:59.058526 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-01-17 00:33:59.058542 | orchestrator | Saturday 17 January 2026 00:33:39 +0000 (0:00:00.509) 0:06:22.341 ******
2026-01-17 00:33:59.058556 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:33:59.058565 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:33:59.058572 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:33:59.058580 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:33:59.058588 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:33:59.058595 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:33:59.058603 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:33:59.058612 | orchestrator |
2026-01-17 00:33:59.058621 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-01-17 00:33:59.058630 | orchestrator | Saturday 17 January 2026 00:33:40 +0000 (0:00:00.538) 0:06:22.880 ******
2026-01-17 00:33:59.058640 | orchestrator | ok: [testbed-manager]
2026-01-17 00:33:59.058649 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:33:59.058659 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:33:59.058667 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:33:59.058676 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:33:59.058684 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:33:59.058693 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:33:59.058702 | orchestrator |
2026-01-17 00:33:59.058711 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-01-17 00:33:59.058720 | orchestrator | Saturday 17 January 2026 00:33:42 +0000 (0:00:01.982) 0:06:24.862 ******
2026-01-17 00:33:59.058730 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:33:59.058741 | orchestrator |
2026-01-17 00:33:59.058751 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-01-17 00:33:59.058760 | orchestrator | Saturday 17 January 2026 00:33:43 +0000 (0:00:00.868) 0:06:25.731 ******
2026-01-17 00:33:59.058769 | orchestrator | ok: [testbed-manager]
2026-01-17 00:33:59.058778 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:33:59.058786 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:33:59.058795 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:33:59.058817 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:33:59.058827 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:33:59.058836 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:33:59.058845 | orchestrator |
2026-01-17 00:33:59.058895 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-01-17 00:33:59.058906 | orchestrator | Saturday 17 January 2026 00:33:44 +0000 (0:00:00.920) 0:06:26.652 ******
2026-01-17 00:33:59.058915 | orchestrator | ok: [testbed-manager]
2026-01-17 00:33:59.058924 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:33:59.058934 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:33:59.058943 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:33:59.058952 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:33:59.058961 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:33:59.058970 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:33:59.058978 | orchestrator |
2026-01-17 00:33:59.058987 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-01-17 00:33:59.059002 | orchestrator | Saturday 17 January 2026 00:33:45 +0000 (0:00:00.862) 0:06:27.514 ******
2026-01-17 00:33:59.059010 | orchestrator | ok: [testbed-manager]
2026-01-17 00:33:59.059018 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:33:59.059026 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:33:59.059034 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:33:59.059042 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:33:59.059050 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:33:59.059058 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:33:59.059066 | orchestrator |
2026-01-17 00:33:59.059074 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-01-17 00:33:59.059097 | orchestrator | Saturday 17 January 2026 00:33:46 +0000 (0:00:01.640) 0:06:29.155 ******
2026-01-17 00:33:59.059106 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:33:59.059113 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:33:59.059121 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:33:59.059129 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:33:59.059137 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:33:59.059144 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:33:59.059152 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:33:59.059160 | orchestrator |
2026-01-17 00:33:59.059168 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-01-17 00:33:59.059176 | orchestrator | Saturday 17 January 2026 00:33:48 +0000 (0:00:01.448) 0:06:30.603 ******
2026-01-17 00:33:59.059184 | orchestrator | ok: [testbed-manager]
2026-01-17 00:33:59.059192 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:33:59.059199 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:33:59.059207 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:33:59.059215 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:33:59.059223 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:33:59.059230 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:33:59.059238 | orchestrator |
2026-01-17 00:33:59.059246 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-01-17 00:33:59.059254 | orchestrator | Saturday 17 January 2026 00:33:49 +0000 (0:00:01.438) 0:06:32.041 ******
2026-01-17 00:33:59.059262 | orchestrator | changed: [testbed-manager]
2026-01-17 00:33:59.059269 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:33:59.059277 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:33:59.059285 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:33:59.059292 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:33:59.059300 |
orchestrator | changed: [testbed-node-1] 2026-01-17 00:33:59.059308 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:33:59.059316 | orchestrator | 2026-01-17 00:33:59.059324 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-01-17 00:33:59.059331 | orchestrator | Saturday 17 January 2026 00:33:51 +0000 (0:00:01.486) 0:06:33.528 ****** 2026-01-17 00:33:59.059339 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:33:59.059348 | orchestrator | 2026-01-17 00:33:59.059355 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-01-17 00:33:59.059363 | orchestrator | Saturday 17 January 2026 00:33:52 +0000 (0:00:01.083) 0:06:34.612 ****** 2026-01-17 00:33:59.059371 | orchestrator | ok: [testbed-manager] 2026-01-17 00:33:59.059379 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:33:59.059387 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:33:59.059395 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:33:59.059402 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:33:59.059410 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:33:59.059418 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:33:59.059426 | orchestrator | 2026-01-17 00:33:59.059434 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-01-17 00:33:59.059441 | orchestrator | Saturday 17 January 2026 00:33:53 +0000 (0:00:01.407) 0:06:36.019 ****** 2026-01-17 00:33:59.059461 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:33:59.059470 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:33:59.059502 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:33:59.059510 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:33:59.059517 | orchestrator | ok: 
[testbed-node-1] 2026-01-17 00:33:59.059525 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:33:59.059533 | orchestrator | ok: [testbed-manager] 2026-01-17 00:33:59.059541 | orchestrator | 2026-01-17 00:33:59.059549 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-01-17 00:33:59.059557 | orchestrator | Saturday 17 January 2026 00:33:55 +0000 (0:00:01.716) 0:06:37.735 ****** 2026-01-17 00:33:59.059565 | orchestrator | ok: [testbed-manager] 2026-01-17 00:33:59.059573 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:33:59.059580 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:33:59.059588 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:33:59.059596 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:33:59.059603 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:33:59.059611 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:33:59.059619 | orchestrator | 2026-01-17 00:33:59.059627 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-01-17 00:33:59.059635 | orchestrator | Saturday 17 January 2026 00:33:56 +0000 (0:00:01.156) 0:06:38.892 ****** 2026-01-17 00:33:59.059643 | orchestrator | ok: [testbed-manager] 2026-01-17 00:33:59.059651 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:33:59.059659 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:33:59.059666 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:33:59.059674 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:33:59.059682 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:33:59.059690 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:33:59.059697 | orchestrator | 2026-01-17 00:33:59.059706 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-01-17 00:33:59.059714 | orchestrator | Saturday 17 January 2026 00:33:57 +0000 (0:00:01.349) 0:06:40.241 ****** 2026-01-17 00:33:59.059722 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:33:59.059730 | orchestrator | 2026-01-17 00:33:59.059738 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-17 00:33:59.059746 | orchestrator | Saturday 17 January 2026 00:33:58 +0000 (0:00:00.924) 0:06:41.166 ****** 2026-01-17 00:33:59.059754 | orchestrator | 2026-01-17 00:33:59.059762 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-17 00:33:59.059770 | orchestrator | Saturday 17 January 2026 00:33:58 +0000 (0:00:00.042) 0:06:41.208 ****** 2026-01-17 00:33:59.059777 | orchestrator | 2026-01-17 00:33:59.059785 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-17 00:33:59.059793 | orchestrator | Saturday 17 January 2026 00:33:58 +0000 (0:00:00.039) 0:06:41.248 ****** 2026-01-17 00:33:59.059801 | orchestrator | 2026-01-17 00:33:59.059809 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-17 00:33:59.059822 | orchestrator | Saturday 17 January 2026 00:33:58 +0000 (0:00:00.046) 0:06:41.295 ****** 2026-01-17 00:34:25.643258 | orchestrator | 2026-01-17 00:34:25.643368 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-17 00:34:25.643384 | orchestrator | Saturday 17 January 2026 00:33:58 +0000 (0:00:00.040) 0:06:41.335 ****** 2026-01-17 00:34:25.643396 | orchestrator | 2026-01-17 00:34:25.643407 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-17 00:34:25.643418 | orchestrator | Saturday 17 January 2026 00:33:58 +0000 (0:00:00.039) 0:06:41.375 ****** 2026-01-17 00:34:25.643495 | orchestrator | 2026-01-17 
00:34:25.643514 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-17 00:34:25.643533 | orchestrator | Saturday 17 January 2026 00:33:58 +0000 (0:00:00.046) 0:06:41.422 ****** 2026-01-17 00:34:25.643569 | orchestrator | 2026-01-17 00:34:25.643580 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-17 00:34:25.643591 | orchestrator | Saturday 17 January 2026 00:33:59 +0000 (0:00:00.041) 0:06:41.463 ****** 2026-01-17 00:34:25.643609 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:34:25.643628 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:34:25.643645 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:34:25.643663 | orchestrator | 2026-01-17 00:34:25.643680 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-01-17 00:34:25.643696 | orchestrator | Saturday 17 January 2026 00:34:00 +0000 (0:00:01.350) 0:06:42.813 ****** 2026-01-17 00:34:25.643713 | orchestrator | changed: [testbed-manager] 2026-01-17 00:34:25.643732 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:34:25.643749 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:34:25.643766 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:34:25.643783 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:34:25.643799 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:34:25.643817 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:34:25.643833 | orchestrator | 2026-01-17 00:34:25.643851 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-01-17 00:34:25.643871 | orchestrator | Saturday 17 January 2026 00:34:02 +0000 (0:00:01.663) 0:06:44.477 ****** 2026-01-17 00:34:25.643889 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:34:25.643907 | orchestrator | changed: [testbed-manager] 2026-01-17 00:34:25.643926 | orchestrator | changed: [testbed-node-4] 2026-01-17 
00:34:25.643944 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:34:25.643962 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:34:25.643981 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:34:25.643999 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:34:25.644018 | orchestrator | 2026-01-17 00:34:25.644035 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-01-17 00:34:25.644054 | orchestrator | Saturday 17 January 2026 00:34:03 +0000 (0:00:01.237) 0:06:45.715 ****** 2026-01-17 00:34:25.644071 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:34:25.644087 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:34:25.644104 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:34:25.644121 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:34:25.644139 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:34:25.644157 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:34:25.644174 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:34:25.644192 | orchestrator | 2026-01-17 00:34:25.644210 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-01-17 00:34:25.644229 | orchestrator | Saturday 17 January 2026 00:34:05 +0000 (0:00:02.175) 0:06:47.890 ****** 2026-01-17 00:34:25.644245 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:34:25.644263 | orchestrator | 2026-01-17 00:34:25.644280 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-01-17 00:34:25.644299 | orchestrator | Saturday 17 January 2026 00:34:05 +0000 (0:00:00.108) 0:06:47.999 ****** 2026-01-17 00:34:25.644316 | orchestrator | ok: [testbed-manager] 2026-01-17 00:34:25.644333 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:34:25.644350 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:34:25.644367 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:34:25.644383 | 
orchestrator | changed: [testbed-node-0] 2026-01-17 00:34:25.644400 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:34:25.644417 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:34:25.644466 | orchestrator | 2026-01-17 00:34:25.644486 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-01-17 00:34:25.644508 | orchestrator | Saturday 17 January 2026 00:34:06 +0000 (0:00:01.017) 0:06:49.017 ****** 2026-01-17 00:34:25.644528 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:34:25.644547 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:34:25.644558 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:34:25.644585 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:34:25.644596 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:34:25.644607 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:34:25.644632 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:34:25.644643 | orchestrator | 2026-01-17 00:34:25.644653 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-01-17 00:34:25.644664 | orchestrator | Saturday 17 January 2026 00:34:07 +0000 (0:00:00.516) 0:06:49.533 ****** 2026-01-17 00:34:25.644676 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:34:25.644689 | orchestrator | 2026-01-17 00:34:25.644700 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-01-17 00:34:25.644710 | orchestrator | Saturday 17 January 2026 00:34:08 +0000 (0:00:01.099) 0:06:50.633 ****** 2026-01-17 00:34:25.644721 | orchestrator | ok: [testbed-manager] 2026-01-17 00:34:25.644732 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:34:25.644742 | orchestrator | ok: 
[testbed-node-4] 2026-01-17 00:34:25.644753 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:34:25.644763 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:34:25.644773 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:34:25.644784 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:34:25.644794 | orchestrator | 2026-01-17 00:34:25.644805 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-01-17 00:34:25.644816 | orchestrator | Saturday 17 January 2026 00:34:09 +0000 (0:00:00.870) 0:06:51.503 ****** 2026-01-17 00:34:25.644826 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-01-17 00:34:25.644860 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-01-17 00:34:25.644871 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-01-17 00:34:25.644882 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-01-17 00:34:25.644892 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-01-17 00:34:25.644903 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-01-17 00:34:25.644919 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-01-17 00:34:25.644937 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-01-17 00:34:25.644954 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-01-17 00:34:25.644972 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-01-17 00:34:25.644989 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-01-17 00:34:25.645006 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-01-17 00:34:25.645024 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-01-17 00:34:25.645042 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-01-17 00:34:25.645059 | orchestrator | 2026-01-17 00:34:25.645077 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-01-17 00:34:25.645096 | orchestrator | Saturday 17 January 2026 00:34:11 +0000 (0:00:02.648) 0:06:54.152 ****** 2026-01-17 00:34:25.645114 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:34:25.645132 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:34:25.645151 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:34:25.645169 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:34:25.645186 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:34:25.645197 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:34:25.645207 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:34:25.645218 | orchestrator | 2026-01-17 00:34:25.645229 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-01-17 00:34:25.645239 | orchestrator | Saturday 17 January 2026 00:34:12 +0000 (0:00:00.746) 0:06:54.898 ****** 2026-01-17 00:34:25.645252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:34:25.645276 | orchestrator | 2026-01-17 00:34:25.645288 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-01-17 00:34:25.645298 | orchestrator | Saturday 17 January 2026 00:34:13 +0000 (0:00:00.828) 0:06:55.727 ****** 2026-01-17 00:34:25.645309 | orchestrator | ok: [testbed-manager] 2026-01-17 00:34:25.645319 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:34:25.645330 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:34:25.645340 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:34:25.645351 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:34:25.645361 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:34:25.645372 | orchestrator | ok: 
[testbed-node-2] 2026-01-17 00:34:25.645382 | orchestrator | 2026-01-17 00:34:25.645393 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-01-17 00:34:25.645403 | orchestrator | Saturday 17 January 2026 00:34:14 +0000 (0:00:00.945) 0:06:56.672 ****** 2026-01-17 00:34:25.645414 | orchestrator | ok: [testbed-manager] 2026-01-17 00:34:25.645482 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:34:25.645495 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:34:25.645505 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:34:25.645516 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:34:25.645526 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:34:25.645537 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:34:25.645547 | orchestrator | 2026-01-17 00:34:25.645558 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-01-17 00:34:25.645569 | orchestrator | Saturday 17 January 2026 00:34:15 +0000 (0:00:01.061) 0:06:57.734 ****** 2026-01-17 00:34:25.645579 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:34:25.645590 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:34:25.645600 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:34:25.645611 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:34:25.645621 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:34:25.645632 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:34:25.645643 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:34:25.645653 | orchestrator | 2026-01-17 00:34:25.645664 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-01-17 00:34:25.645675 | orchestrator | Saturday 17 January 2026 00:34:15 +0000 (0:00:00.535) 0:06:58.270 ****** 2026-01-17 00:34:25.645686 | orchestrator | ok: [testbed-manager] 2026-01-17 00:34:25.645704 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:34:25.645715 | 
orchestrator | ok: [testbed-node-4] 2026-01-17 00:34:25.645726 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:34:25.645736 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:34:25.645747 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:34:25.645758 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:34:25.645768 | orchestrator | 2026-01-17 00:34:25.645779 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-01-17 00:34:25.645789 | orchestrator | Saturday 17 January 2026 00:34:17 +0000 (0:00:01.649) 0:06:59.919 ****** 2026-01-17 00:34:25.645800 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:34:25.645811 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:34:25.645821 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:34:25.645832 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:34:25.645842 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:34:25.645853 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:34:25.645863 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:34:25.645874 | orchestrator | 2026-01-17 00:34:25.645885 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-01-17 00:34:25.645896 | orchestrator | Saturday 17 January 2026 00:34:18 +0000 (0:00:00.523) 0:07:00.443 ****** 2026-01-17 00:34:25.645906 | orchestrator | ok: [testbed-manager] 2026-01-17 00:34:25.645917 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:34:25.645927 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:34:25.645938 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:34:25.645956 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:34:25.645966 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:34:25.645989 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:34:59.168077 | orchestrator | 2026-01-17 00:34:59.168189 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-01-17 00:34:59.168205 | orchestrator | Saturday 17 January 2026 00:34:25 +0000 (0:00:07.612) 0:07:08.055 ****** 2026-01-17 00:34:59.168215 | orchestrator | ok: [testbed-manager] 2026-01-17 00:34:59.168226 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:34:59.168236 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:34:59.168246 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:34:59.168256 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:34:59.168265 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:34:59.168275 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:34:59.168285 | orchestrator | 2026-01-17 00:34:59.168295 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-01-17 00:34:59.168304 | orchestrator | Saturday 17 January 2026 00:34:27 +0000 (0:00:01.592) 0:07:09.647 ****** 2026-01-17 00:34:59.168314 | orchestrator | ok: [testbed-manager] 2026-01-17 00:34:59.168323 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:34:59.168333 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:34:59.168342 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:34:59.168352 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:34:59.168413 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:34:59.168424 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:34:59.168433 | orchestrator | 2026-01-17 00:34:59.168443 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-01-17 00:34:59.168454 | orchestrator | Saturday 17 January 2026 00:34:29 +0000 (0:00:01.857) 0:07:11.505 ****** 2026-01-17 00:34:59.168463 | orchestrator | ok: [testbed-manager] 2026-01-17 00:34:59.168473 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:34:59.168482 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:34:59.168492 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:34:59.168501 | 
orchestrator | changed: [testbed-node-0] 2026-01-17 00:34:59.168511 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:34:59.168520 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:34:59.168530 | orchestrator | 2026-01-17 00:34:59.168540 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-17 00:34:59.168549 | orchestrator | Saturday 17 January 2026 00:34:30 +0000 (0:00:01.782) 0:07:13.288 ****** 2026-01-17 00:34:59.168559 | orchestrator | ok: [testbed-manager] 2026-01-17 00:34:59.168568 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:34:59.168578 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:34:59.168588 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:34:59.168597 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:34:59.168607 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:34:59.168618 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:34:59.168629 | orchestrator | 2026-01-17 00:34:59.168640 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-17 00:34:59.168651 | orchestrator | Saturday 17 January 2026 00:34:32 +0000 (0:00:01.185) 0:07:14.473 ****** 2026-01-17 00:34:59.168661 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:34:59.168672 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:34:59.168683 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:34:59.168693 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:34:59.168704 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:34:59.168715 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:34:59.168725 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:34:59.168736 | orchestrator | 2026-01-17 00:34:59.168747 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-01-17 00:34:59.168758 | orchestrator | Saturday 17 January 2026 00:34:33 +0000 (0:00:01.038) 0:07:15.511 ****** 
2026-01-17 00:34:59.168769 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:34:59.168780 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:34:59.168814 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:34:59.168826 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:34:59.168837 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:34:59.168848 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:34:59.168858 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:34:59.168870 | orchestrator | 2026-01-17 00:34:59.168881 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-01-17 00:34:59.168892 | orchestrator | Saturday 17 January 2026 00:34:33 +0000 (0:00:00.546) 0:07:16.058 ****** 2026-01-17 00:34:59.168903 | orchestrator | ok: [testbed-manager] 2026-01-17 00:34:59.168914 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:34:59.168925 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:34:59.168936 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:34:59.168947 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:34:59.168956 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:34:59.168966 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:34:59.168975 | orchestrator | 2026-01-17 00:34:59.168985 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-01-17 00:34:59.168994 | orchestrator | Saturday 17 January 2026 00:34:34 +0000 (0:00:00.576) 0:07:16.634 ****** 2026-01-17 00:34:59.169019 | orchestrator | ok: [testbed-manager] 2026-01-17 00:34:59.169035 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:34:59.169052 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:34:59.169067 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:34:59.169082 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:34:59.169099 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:34:59.169115 | orchestrator | ok: [testbed-node-2] 2026-01-17 
2026-01-17 00:34:59.169132 | orchestrator |
2026-01-17 00:34:59.169148 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-01-17 00:34:59.169164 | orchestrator | Saturday 17 January 2026 00:34:34 +0000 (0:00:00.536) 0:07:17.171 ******
2026-01-17 00:34:59.169174 | orchestrator | ok: [testbed-manager]
2026-01-17 00:34:59.169184 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:34:59.169193 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:34:59.169203 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:34:59.169212 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:34:59.169221 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:34:59.169231 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:34:59.169240 | orchestrator |
2026-01-17 00:34:59.169250 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-01-17 00:34:59.169259 | orchestrator | Saturday 17 January 2026 00:34:35 +0000 (0:00:00.740) 0:07:17.911 ******
2026-01-17 00:34:59.169269 | orchestrator | ok: [testbed-manager]
2026-01-17 00:34:59.169278 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:34:59.169287 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:34:59.169297 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:34:59.169306 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:34:59.169317 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:34:59.169331 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:34:59.169341 | orchestrator |
2026-01-17 00:34:59.169388 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-01-17 00:34:59.169400 | orchestrator | Saturday 17 January 2026 00:34:40 +0000 (0:00:05.437) 0:07:23.349 ******
2026-01-17 00:34:59.169410 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:34:59.169419 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:34:59.169428 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:34:59.169438 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:34:59.169447 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:34:59.169457 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:34:59.169466 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:34:59.169476 | orchestrator |
2026-01-17 00:34:59.169485 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-01-17 00:34:59.169495 | orchestrator | Saturday 17 January 2026 00:34:41 +0000 (0:00:00.572) 0:07:23.921 ******
2026-01-17 00:34:59.169507 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:34:59.169528 | orchestrator |
2026-01-17 00:34:59.169538 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-01-17 00:34:59.169547 | orchestrator | Saturday 17 January 2026 00:34:42 +0000 (0:00:01.074) 0:07:24.996 ******
2026-01-17 00:34:59.169557 | orchestrator | ok: [testbed-manager]
2026-01-17 00:34:59.169566 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:34:59.169576 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:34:59.169585 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:34:59.169594 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:34:59.169604 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:34:59.169613 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:34:59.169622 | orchestrator |
2026-01-17 00:34:59.169632 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-01-17 00:34:59.169641 | orchestrator | Saturday 17 January 2026 00:34:44 +0000 (0:00:01.935) 0:07:26.931 ******
2026-01-17 00:34:59.169651 | orchestrator | ok: [testbed-manager]
2026-01-17 00:34:59.169660 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:34:59.169669 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:34:59.169679 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:34:59.169688 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:34:59.169697 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:34:59.169707 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:34:59.169716 | orchestrator |
2026-01-17 00:34:59.169726 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-01-17 00:34:59.169735 | orchestrator | Saturday 17 January 2026 00:34:45 +0000 (0:00:01.209) 0:07:28.141 ******
2026-01-17 00:34:59.169745 | orchestrator | ok: [testbed-manager]
2026-01-17 00:34:59.169754 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:34:59.169764 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:34:59.169773 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:34:59.169782 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:34:59.169792 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:34:59.169801 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:34:59.169811 | orchestrator |
2026-01-17 00:34:59.169820 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-01-17 00:34:59.169830 | orchestrator | Saturday 17 January 2026 00:34:46 +0000 (0:00:00.919) 0:07:29.061 ******
2026-01-17 00:34:59.169840 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-17 00:34:59.169851 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-17 00:34:59.169860 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-17 00:34:59.169870 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-17 00:34:59.169879 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-17 00:34:59.169889 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-17 00:34:59.169899 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-17 00:34:59.169908 | orchestrator |
2026-01-17 00:34:59.169918 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-01-17 00:34:59.169927 | orchestrator | Saturday 17 January 2026 00:34:48 +0000 (0:00:01.932) 0:07:30.993 ******
2026-01-17 00:34:59.169937 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:34:59.169953 | orchestrator |
2026-01-17 00:34:59.169963 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-01-17 00:34:59.169973 | orchestrator | Saturday 17 January 2026 00:34:49 +0000 (0:00:00.866) 0:07:31.860 ******
2026-01-17 00:34:59.169982 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:34:59.169992 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:34:59.170001 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:34:59.170011 | orchestrator | changed: [testbed-manager]
2026-01-17 00:34:59.170078 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:34:59.170089 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:34:59.170098 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:34:59.170108 | orchestrator |
2026-01-17 00:34:59.170124 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-01-17 00:35:30.589563 | orchestrator | Saturday 17 January 2026 00:34:59 +0000 (0:00:09.718) 0:07:41.578 ******
2026-01-17 00:35:30.589665 | orchestrator | ok: [testbed-manager]
2026-01-17 00:35:30.589677 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:35:30.589684 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:35:30.589692 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:35:30.589702 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:35:30.589711 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:35:30.589720 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:35:30.589729 | orchestrator |
2026-01-17 00:35:30.589739 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-01-17 00:35:30.589748 | orchestrator | Saturday 17 January 2026 00:35:01 +0000 (0:00:02.077) 0:07:43.655 ******
2026-01-17 00:35:30.589758 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:35:30.589764 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:35:30.589770 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:35:30.589776 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:35:30.589781 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:35:30.589786 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:35:30.589792 | orchestrator |
2026-01-17 00:35:30.589797 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-01-17 00:35:30.589803 | orchestrator | Saturday 17 January 2026 00:35:02 +0000 (0:00:01.309) 0:07:44.965 ******
2026-01-17 00:35:30.589809 | orchestrator | changed: [testbed-manager]
2026-01-17 00:35:30.589815 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:35:30.589820 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:35:30.589826 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:35:30.589831 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:35:30.589836 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:35:30.589841 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:35:30.589847 | orchestrator |
2026-01-17 00:35:30.589854 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-01-17 00:35:30.589864 | orchestrator |
2026-01-17 00:35:30.589869 | orchestrator | TASK [Include hardening role] **************************************************
2026-01-17 00:35:30.589875 | orchestrator | Saturday 17 January 2026 00:35:03 +0000 (0:00:01.356) 0:07:46.321 ******
2026-01-17 00:35:30.589880 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:35:30.589885 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:35:30.589891 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:35:30.589896 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:35:30.589901 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:35:30.589906 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:35:30.589911 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:35:30.589917 | orchestrator |
2026-01-17 00:35:30.589922 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-01-17 00:35:30.589927 | orchestrator |
2026-01-17 00:35:30.589933 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-01-17 00:35:30.589938 | orchestrator | Saturday 17 January 2026 00:35:04 +0000 (0:00:00.749) 0:07:47.071 ******
2026-01-17 00:35:30.589957 | orchestrator | changed: [testbed-manager]
2026-01-17 00:35:30.589964 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:35:30.589973 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:35:30.589981 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:35:30.589990 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:35:30.589998 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:35:30.590006 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:35:30.590080 | orchestrator |
2026-01-17 00:35:30.590094 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-01-17 00:35:30.590101 | orchestrator | Saturday 17 January 2026 00:35:05 +0000 (0:00:01.349) 0:07:48.420 ******
2026-01-17 00:35:30.590107 | orchestrator | ok: [testbed-manager]
2026-01-17 00:35:30.590120 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:35:30.590126 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:35:30.590133 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:35:30.590139 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:35:30.590145 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:35:30.590151 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:35:30.590157 | orchestrator |
2026-01-17 00:35:30.590163 | orchestrator | TASK [Include auditd role] *****************************************************
2026-01-17 00:35:30.590169 | orchestrator | Saturday 17 January 2026 00:35:07 +0000 (0:00:01.477) 0:07:49.898 ******
2026-01-17 00:35:30.590175 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:35:30.590181 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:35:30.590187 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:35:30.590193 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:35:30.590199 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:35:30.590205 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:35:30.590212 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:35:30.590218 | orchestrator |
2026-01-17 00:35:30.590224 | orchestrator | TASK [Include smartd role] *****************************************************
2026-01-17 00:35:30.590235 | orchestrator | Saturday 17 January 2026 00:35:08 +0000 (0:00:00.568) 0:07:50.467 ******
2026-01-17 00:35:30.590241 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:35:30.590248 | orchestrator |
2026-01-17 00:35:30.590253 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-01-17 00:35:30.590259 | orchestrator | Saturday 17 January 2026 00:35:09 +0000 (0:00:01.021) 0:07:51.488 ******
2026-01-17 00:35:30.590266 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:35:30.590274 | orchestrator |
2026-01-17 00:35:30.590279 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-01-17 00:35:30.590284 | orchestrator | Saturday 17 January 2026 00:35:09 +0000 (0:00:00.833) 0:07:52.322 ******
2026-01-17 00:35:30.590289 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:35:30.590295 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:35:30.590300 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:35:30.590342 | orchestrator | changed: [testbed-manager]
2026-01-17 00:35:30.590349 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:35:30.590354 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:35:30.590359 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:35:30.590364 | orchestrator |
2026-01-17 00:35:30.590383 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-01-17 00:35:30.590389 | orchestrator | Saturday 17 January 2026 00:35:18 +0000 (0:00:08.506) 0:08:00.829 ******
2026-01-17 00:35:30.590394 | orchestrator | changed: [testbed-manager]
2026-01-17 00:35:30.590399 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:35:30.590404 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:35:30.590410 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:35:30.590422 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:35:30.590427 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:35:30.590433 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:35:30.590438 | orchestrator |
2026-01-17 00:35:30.590443 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-01-17 00:35:30.590448 | orchestrator | Saturday 17 January 2026 00:35:19 +0000 (0:00:01.103) 0:08:01.932 ******
2026-01-17 00:35:30.590454 | orchestrator | changed: [testbed-manager]
2026-01-17 00:35:30.590459 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:35:30.590464 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:35:30.590470 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:35:30.590475 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:35:30.590480 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:35:30.590485 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:35:30.590490 | orchestrator |
2026-01-17 00:35:30.590496 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-01-17 00:35:30.590501 | orchestrator | Saturday 17 January 2026 00:35:20 +0000 (0:00:01.377) 0:08:03.310 ******
2026-01-17 00:35:30.590507 | orchestrator | changed: [testbed-manager]
2026-01-17 00:35:30.590512 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:35:30.590517 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:35:30.590522 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:35:30.590527 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:35:30.590533 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:35:30.590538 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:35:30.590543 | orchestrator |
2026-01-17 00:35:30.590549 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-01-17 00:35:30.590554 | orchestrator | Saturday 17 January 2026 00:35:22 +0000 (0:00:02.070) 0:08:05.380 ******
2026-01-17 00:35:30.590559 | orchestrator | changed: [testbed-manager]
2026-01-17 00:35:30.590564 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:35:30.590570 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:35:30.590575 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:35:30.590580 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:35:30.590585 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:35:30.590591 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:35:30.590596 | orchestrator |
2026-01-17 00:35:30.590601 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-01-17 00:35:30.590606 | orchestrator | Saturday 17 January 2026 00:35:24 +0000 (0:00:01.263) 0:08:06.644 ******
2026-01-17 00:35:30.590612 | orchestrator | changed: [testbed-manager]
2026-01-17 00:35:30.590617 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:35:30.590622 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:35:30.590628 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:35:30.590633 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:35:30.590638 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:35:30.590647 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:35:30.590657 | orchestrator |
2026-01-17 00:35:30.590665 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-01-17 00:35:30.590673 | orchestrator |
2026-01-17 00:35:30.590682 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-01-17 00:35:30.590690 | orchestrator | Saturday 17 January 2026 00:35:25 +0000 (0:00:01.162) 0:08:07.806 ******
2026-01-17 00:35:30.590699 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:35:30.590707 | orchestrator |
2026-01-17 00:35:30.590716 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-17 00:35:30.590724 | orchestrator | Saturday 17 January 2026 00:35:26 +0000 (0:00:00.862) 0:08:08.669 ******
2026-01-17 00:35:30.590732 | orchestrator | ok: [testbed-manager]
2026-01-17 00:35:30.590740 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:35:30.590748 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:35:30.590762 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:35:30.590771 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:35:30.590779 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:35:30.590787 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:35:30.590795 | orchestrator |
2026-01-17 00:35:30.590802 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-17 00:35:30.590810 | orchestrator | Saturday 17 January 2026 00:35:27 +0000 (0:00:01.118) 0:08:09.787 ******
2026-01-17 00:35:30.590823 | orchestrator | changed: [testbed-manager]
2026-01-17 00:35:30.590832 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:35:30.590839 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:35:30.590848 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:35:30.590856 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:35:30.590865 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:35:30.590873 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:35:30.590882 | orchestrator |
2026-01-17 00:35:30.590892 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-01-17 00:35:30.590901 | orchestrator | Saturday 17 January 2026 00:35:28 +0000 (0:00:01.209) 0:08:10.996 ******
2026-01-17 00:35:30.590910 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:35:30.590918 | orchestrator |
2026-01-17 00:35:30.590927 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-17 00:35:30.590936 | orchestrator | Saturday 17 January 2026 00:35:29 +0000 (0:00:00.849) 0:08:11.846 ******
2026-01-17 00:35:30.590944 | orchestrator | ok: [testbed-manager]
2026-01-17 00:35:30.590953 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:35:30.590962 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:35:30.590971 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:35:30.590980 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:35:30.590989 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:35:30.590999 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:35:30.591005 | orchestrator |
2026-01-17 00:35:30.591016 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-17 00:35:32.260109 | orchestrator | Saturday 17 January 2026 00:35:30 +0000 (0:00:01.155) 0:08:13.002 ******
2026-01-17 00:35:32.260224 | orchestrator | changed: [testbed-manager]
2026-01-17 00:35:32.260250 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:35:32.260271 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:35:32.260290 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:35:32.260388 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:35:32.260402 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:35:32.260413 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:35:32.260424 | orchestrator |
2026-01-17 00:35:32.260436 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 00:35:32.260448 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-01-17 00:35:32.260461 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-17 00:35:32.260472 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-17 00:35:32.260483 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-17 00:35:32.260493 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-01-17 00:35:32.260504 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-17 00:35:32.260515 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-17 00:35:32.260554 | orchestrator |
2026-01-17 00:35:32.260566 | orchestrator |
2026-01-17 00:35:32.260577 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 00:35:32.260588 | orchestrator | Saturday 17 January 2026 00:35:31 +0000 (0:00:01.127) 0:08:14.129 ******
2026-01-17 00:35:32.260598 | orchestrator | ===============================================================================
2026-01-17 00:35:32.260609 | orchestrator | osism.commons.packages : Install required packages --------------------- 73.16s
2026-01-17 00:35:32.260620 | orchestrator | osism.commons.packages : Download required packages -------------------- 34.46s
2026-01-17 00:35:32.260630 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.40s
2026-01-17 00:35:32.260641 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.74s
2026-01-17 00:35:32.260652 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.17s
2026-01-17 00:35:32.260663 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.13s
2026-01-17 00:35:32.260673 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.26s
2026-01-17 00:35:32.260684 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.87s
2026-01-17 00:35:32.260695 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.72s
2026-01-17 00:35:32.260705 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.67s
2026-01-17 00:35:32.260716 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.82s
2026-01-17 00:35:32.260726 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.51s
2026-01-17 00:35:32.260737 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.44s
2026-01-17 00:35:32.260747 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.01s
2026-01-17 00:35:32.260758 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.81s
2026-01-17 00:35:32.260784 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.61s
2026-01-17 00:35:32.260796 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.82s
2026-01-17 00:35:32.260807 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.25s
2026-01-17 00:35:32.260818 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.16s
2026-01-17 00:35:32.260829 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.44s
2026-01-17 00:35:32.622692 | orchestrator | + osism apply fail2ban
2026-01-17 00:35:45.584453 | orchestrator | 2026-01-17 00:35:45 | INFO  | Task 4176a5a8-2c53-45c4-a674-bfb4fd3b526b (fail2ban) was prepared for execution.
2026-01-17 00:35:45.584566 | orchestrator | 2026-01-17 00:35:45 | INFO  | It takes a moment until task 4176a5a8-2c53-45c4-a674-bfb4fd3b526b (fail2ban) has been started and output is visible here.
2026-01-17 00:36:07.802953 | orchestrator |
2026-01-17 00:36:07.803042 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-01-17 00:36:07.803057 | orchestrator |
2026-01-17 00:36:07.803068 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-01-17 00:36:07.803080 | orchestrator | Saturday 17 January 2026 00:35:50 +0000 (0:00:00.291) 0:00:00.291 ******
2026-01-17 00:36:07.803093 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-17 00:36:07.803107 | orchestrator |
2026-01-17 00:36:07.803118 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-01-17 00:36:07.803129 | orchestrator | Saturday 17 January 2026 00:35:51 +0000 (0:00:01.158) 0:00:01.450 ******
2026-01-17 00:36:07.803166 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:36:07.803178 | orchestrator | changed: [testbed-manager]
2026-01-17 00:36:07.803189 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:36:07.803200 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:36:07.803211 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:36:07.803222 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:36:07.803232 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:36:07.803268 | orchestrator |
2026-01-17 00:36:07.803280 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-01-17 00:36:07.803291 | orchestrator | Saturday 17 January 2026 00:36:02 +0000 (0:00:11.281) 0:00:12.731 ******
2026-01-17 00:36:07.803302 | orchestrator | changed: [testbed-manager]
2026-01-17 00:36:07.803313 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:36:07.803323 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:36:07.803334 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:36:07.803344 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:36:07.803355 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:36:07.803365 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:36:07.803376 | orchestrator |
2026-01-17 00:36:07.803387 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-01-17 00:36:07.803397 | orchestrator | Saturday 17 January 2026 00:36:04 +0000 (0:00:01.592) 0:00:14.323 ******
2026-01-17 00:36:07.803408 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:36:07.803420 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:36:07.803431 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:36:07.803441 | orchestrator | ok: [testbed-manager]
2026-01-17 00:36:07.803452 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:36:07.803462 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:36:07.803473 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:36:07.803483 | orchestrator |
2026-01-17 00:36:07.803494 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-01-17 00:36:07.803505 | orchestrator | Saturday 17 January 2026 00:36:05 +0000 (0:00:01.577) 0:00:15.900 ******
2026-01-17 00:36:07.803517 | orchestrator | changed: [testbed-manager]
2026-01-17 00:36:07.803530 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:36:07.803541 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:36:07.803553 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:36:07.803566 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:36:07.803578 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:36:07.803591 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:36:07.803603 | orchestrator |
2026-01-17 00:36:07.803615 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 00:36:07.803628 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 00:36:07.803642 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 00:36:07.803654 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 00:36:07.803667 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 00:36:07.803680 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 00:36:07.803692 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 00:36:07.803705 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 00:36:07.803718 | orchestrator |
2026-01-17 00:36:07.803728 | orchestrator |
2026-01-17 00:36:07.803739 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 00:36:07.803772 | orchestrator | Saturday 17 January 2026 00:36:07 +0000 (0:00:01.568) 0:00:17.469 ******
2026-01-17 00:36:07.803784 | orchestrator | ===============================================================================
2026-01-17 00:36:07.803795 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.28s
2026-01-17 00:36:07.803805 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.59s
2026-01-17 00:36:07.803816 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.58s
2026-01-17 00:36:07.803827 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.57s
2026-01-17 00:36:07.803838 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.16s
2026-01-17 00:36:08.029334 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-01-17 00:36:08.029416 | orchestrator | + osism apply network
2026-01-17 00:36:19.957780 | orchestrator | 2026-01-17 00:36:19 | INFO  | Task 3f99eda9-1158-4c38-8d3b-31beb7f71b1e (network) was prepared for execution.
2026-01-17 00:36:19.957892 | orchestrator | 2026-01-17 00:36:19 | INFO  | It takes a moment until task 3f99eda9-1158-4c38-8d3b-31beb7f71b1e (network) has been started and output is visible here.
2026-01-17 00:36:50.153279 | orchestrator |
2026-01-17 00:36:50.153394 | orchestrator | PLAY [Apply role network] ******************************************************
2026-01-17 00:36:50.153411 | orchestrator |
2026-01-17 00:36:50.153423 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-01-17 00:36:50.153435 | orchestrator | Saturday 17 January 2026 00:36:24 +0000 (0:00:00.273) 0:00:00.273 ******
2026-01-17 00:36:50.153446 | orchestrator | ok: [testbed-manager]
2026-01-17 00:36:50.153458 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:36:50.153469 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:36:50.153481 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:36:50.153492 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:36:50.153503 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:36:50.153514 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:36:50.153524 | orchestrator |
2026-01-17 00:36:50.153535 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-01-17 00:36:50.153547 | orchestrator | Saturday 17 January 2026 00:36:25 +0000 (0:00:00.750) 0:00:01.023 ******
2026-01-17 00:36:50.153559 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-17 00:36:50.153573 | orchestrator |
2026-01-17 00:36:50.153584 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-01-17 00:36:50.153595 | orchestrator | Saturday 17 January 2026 00:36:26 +0000 (0:00:01.244) 0:00:02.268 ******
2026-01-17 00:36:50.153606 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:36:50.153617 | orchestrator | ok: [testbed-manager]
2026-01-17 00:36:50.153627 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:36:50.153638 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:36:50.153649 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:36:50.153659 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:36:50.153670 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:36:50.153681 | orchestrator |
2026-01-17 00:36:50.153691 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-01-17 00:36:50.153702 | orchestrator | Saturday 17 January 2026 00:36:28 +0000 (0:00:02.006) 0:00:04.274 ******
2026-01-17 00:36:50.153713 | orchestrator | ok: [testbed-manager]
2026-01-17 00:36:50.153724 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:36:50.153735 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:36:50.153745 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:36:50.153756 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:36:50.153767 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:36:50.153779 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:36:50.153791 | orchestrator |
2026-01-17 00:36:50.153804 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-01-17 00:36:50.153843 | orchestrator | Saturday 17 January 2026 00:36:30 +0000 (0:00:01.901) 0:00:06.175 ******
2026-01-17 00:36:50.153856 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-01-17 00:36:50.153869 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-01-17 00:36:50.153881 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-01-17 00:36:50.153893 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-01-17 00:36:50.153906 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-01-17 00:36:50.153918 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-01-17 00:36:50.153930 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-01-17 00:36:50.153943 | orchestrator |
2026-01-17 00:36:50.153955 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-01-17 00:36:50.153967 | orchestrator | Saturday 17 January 2026 00:36:31 +0000 (0:00:00.996) 0:00:07.172 ******
2026-01-17 00:36:50.153979 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-17 00:36:50.153992 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-17 00:36:50.154004 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-17 00:36:50.154073 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-17 00:36:50.154087 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-17 00:36:50.154098 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-17 00:36:50.154110 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-17 00:36:50.154122 | orchestrator |
2026-01-17 00:36:50.154135 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-01-17 00:36:50.154147 | orchestrator | Saturday 17 January 2026 00:36:34 +0000 (0:00:03.471) 0:00:10.643 ******
2026-01-17 00:36:50.154159 | orchestrator | changed: [testbed-manager]
2026-01-17 00:36:50.154200 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:36:50.154212 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:36:50.154223 | orchestrator | changed:
[testbed-node-2] 2026-01-17 00:36:50.154234 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:36:50.154245 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:36:50.154255 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:36:50.154266 | orchestrator | 2026-01-17 00:36:50.154276 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-01-17 00:36:50.154303 | orchestrator | Saturday 17 January 2026 00:36:36 +0000 (0:00:01.767) 0:00:12.411 ****** 2026-01-17 00:36:50.154314 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-17 00:36:50.154325 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-17 00:36:50.154335 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-17 00:36:50.154346 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-17 00:36:50.154356 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-17 00:36:50.154367 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-17 00:36:50.154377 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-17 00:36:50.154388 | orchestrator | 2026-01-17 00:36:50.154399 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-01-17 00:36:50.154409 | orchestrator | Saturday 17 January 2026 00:36:38 +0000 (0:00:01.781) 0:00:14.193 ****** 2026-01-17 00:36:50.154420 | orchestrator | ok: [testbed-manager] 2026-01-17 00:36:50.154431 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:36:50.154441 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:36:50.154452 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:36:50.154462 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:36:50.154473 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:36:50.154483 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:36:50.154494 | orchestrator | 2026-01-17 00:36:50.154505 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-01-17 00:36:50.154532 | 
orchestrator | Saturday 17 January 2026 00:36:39 +0000 (0:00:01.188) 0:00:15.381 ****** 2026-01-17 00:36:50.154544 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:36:50.154555 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:36:50.154566 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:36:50.154586 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:36:50.154597 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:36:50.154608 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:36:50.154618 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:36:50.154629 | orchestrator | 2026-01-17 00:36:50.154640 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-01-17 00:36:50.154651 | orchestrator | Saturday 17 January 2026 00:36:40 +0000 (0:00:00.699) 0:00:16.081 ****** 2026-01-17 00:36:50.154661 | orchestrator | ok: [testbed-manager] 2026-01-17 00:36:50.154672 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:36:50.154683 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:36:50.154693 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:36:50.154704 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:36:50.154714 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:36:50.154725 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:36:50.154735 | orchestrator | 2026-01-17 00:36:50.154746 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-01-17 00:36:50.154757 | orchestrator | Saturday 17 January 2026 00:36:42 +0000 (0:00:02.297) 0:00:18.378 ****** 2026-01-17 00:36:50.154768 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:36:50.154779 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:36:50.154789 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:36:50.154800 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:36:50.154810 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:36:50.154821 | 
orchestrator | skipping: [testbed-node-5] 2026-01-17 00:36:50.154832 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-01-17 00:36:50.154844 | orchestrator | 2026-01-17 00:36:50.154855 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-01-17 00:36:50.154866 | orchestrator | Saturday 17 January 2026 00:36:43 +0000 (0:00:00.956) 0:00:19.335 ****** 2026-01-17 00:36:50.154877 | orchestrator | ok: [testbed-manager] 2026-01-17 00:36:50.154887 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:36:50.154898 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:36:50.154909 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:36:50.154919 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:36:50.154930 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:36:50.154941 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:36:50.154951 | orchestrator | 2026-01-17 00:36:50.154962 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-01-17 00:36:50.154973 | orchestrator | Saturday 17 January 2026 00:36:45 +0000 (0:00:01.720) 0:00:21.056 ****** 2026-01-17 00:36:50.154984 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 00:36:50.154996 | orchestrator | 2026-01-17 00:36:50.155007 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-01-17 00:36:50.155017 | orchestrator | Saturday 17 January 2026 00:36:46 +0000 (0:00:01.278) 0:00:22.334 ****** 2026-01-17 00:36:50.155028 | orchestrator | ok: [testbed-manager] 2026-01-17 00:36:50.155039 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:36:50.155049 | orchestrator 
| ok: [testbed-node-2] 2026-01-17 00:36:50.155060 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:36:50.155071 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:36:50.155081 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:36:50.155092 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:36:50.155102 | orchestrator | 2026-01-17 00:36:50.155113 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-01-17 00:36:50.155124 | orchestrator | Saturday 17 January 2026 00:36:48 +0000 (0:00:01.759) 0:00:24.094 ****** 2026-01-17 00:36:50.155135 | orchestrator | ok: [testbed-manager] 2026-01-17 00:36:50.155145 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:36:50.155156 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:36:50.155192 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:36:50.155203 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:36:50.155214 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:36:50.155224 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:36:50.155235 | orchestrator | 2026-01-17 00:36:50.155246 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-17 00:36:50.155257 | orchestrator | Saturday 17 January 2026 00:36:48 +0000 (0:00:00.649) 0:00:24.743 ****** 2026-01-17 00:36:50.155268 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-01-17 00:36:50.155278 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-01-17 00:36:50.155289 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-01-17 00:36:50.155300 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-01-17 00:36:50.155311 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-17 00:36:50.155321 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-01-17 00:36:50.155332 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-17 00:36:50.155343 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-01-17 00:36:50.155353 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-17 00:36:50.155364 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-17 00:36:50.155375 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-17 00:36:50.155385 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-01-17 00:36:50.155404 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-17 00:36:50.155415 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-17 00:36:50.155426 | orchestrator | 2026-01-17 00:36:50.155444 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-01-17 00:37:06.621849 | orchestrator | Saturday 17 January 2026 00:36:50 +0000 (0:00:01.293) 0:00:26.036 ****** 2026-01-17 00:37:06.621967 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:37:06.621994 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:37:06.622107 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:37:06.622128 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:37:06.622148 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:37:06.622194 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:37:06.622213 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:37:06.622232 | orchestrator | 2026-01-17 00:37:06.622252 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-01-17 00:37:06.622270 | orchestrator | Saturday 17 January 2026 00:36:50 +0000 (0:00:00.663) 0:00:26.699 ****** 2026-01-17 00:37:06.622290 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-3, testbed-node-4, testbed-node-2, testbed-node-5 2026-01-17 00:37:06.622310 | orchestrator | 2026-01-17 00:37:06.622327 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-01-17 00:37:06.622344 | orchestrator | Saturday 17 January 2026 00:36:55 +0000 (0:00:04.541) 0:00:31.241 ****** 2026-01-17 00:37:06.622366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-17 00:37:06.622384 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-17 00:37:06.622441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-17 00:37:06.622464 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-17 00:37:06.622483 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-17 
00:37:06.622502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-17 00:37:06.622521 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-17 00:37:06.622556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-17 00:37:06.622589 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-17 00:37:06.622608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-17 00:37:06.622664 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-17 00:37:06.622711 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-17 00:37:06.622733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-17 00:37:06.622751 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-17 00:37:06.622769 | orchestrator | 2026-01-17 00:37:06.622787 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-01-17 00:37:06.622805 | orchestrator | Saturday 17 January 2026 00:37:00 +0000 (0:00:05.487) 0:00:36.728 ****** 2026-01-17 00:37:06.622823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-17 00:37:06.622858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-17 00:37:06.622876 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-17 00:37:06.622895 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-17 00:37:06.622915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-17 00:37:06.622933 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-17 00:37:06.622953 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-17 00:37:06.622969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-17 00:37:06.622987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-17 00:37:06.623014 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 
'mtu': 1350, 'vni': 23}}) 2026-01-17 00:37:06.623033 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-17 00:37:06.623050 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-17 00:37:06.623090 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-17 00:37:20.327147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-17 00:37:20.327244 | orchestrator | 2026-01-17 00:37:20.327254 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-01-17 00:37:20.327260 | orchestrator | Saturday 17 January 2026 00:37:06 +0000 (0:00:05.770) 0:00:42.499 ****** 2026-01-17 00:37:20.327284 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 00:37:20.327290 | orchestrator | 2026-01-17 00:37:20.327294 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
2026-01-17 00:37:20.327299 | orchestrator | Saturday 17 January 2026 00:37:08 +0000 (0:00:01.468) 0:00:43.967 ****** 2026-01-17 00:37:20.327304 | orchestrator | ok: [testbed-manager] 2026-01-17 00:37:20.327309 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:37:20.327313 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:37:20.327318 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:37:20.327322 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:37:20.327326 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:37:20.327331 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:37:20.327335 | orchestrator | 2026-01-17 00:37:20.327340 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-17 00:37:20.327344 | orchestrator | Saturday 17 January 2026 00:37:09 +0000 (0:00:01.065) 0:00:45.033 ****** 2026-01-17 00:37:20.327348 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-17 00:37:20.327353 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-17 00:37:20.327358 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-17 00:37:20.327362 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-17 00:37:20.327366 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:37:20.327371 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-17 00:37:20.327376 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-17 00:37:20.327380 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-17 00:37:20.327384 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-17 00:37:20.327388 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:37:20.327393 | 
orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-17 00:37:20.327397 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-17 00:37:20.327401 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-17 00:37:20.327405 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-17 00:37:20.327410 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:37:20.327414 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-17 00:37:20.327418 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-17 00:37:20.327422 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-17 00:37:20.327427 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-17 00:37:20.327431 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:37:20.327435 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-17 00:37:20.327439 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-17 00:37:20.327454 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-17 00:37:20.327458 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-17 00:37:20.327462 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:37:20.327467 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-17 00:37:20.327471 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-17 00:37:20.327479 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  
2026-01-17 00:37:20.327483 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-17 00:37:20.327488 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:37:20.327492 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-17 00:37:20.327496 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-17 00:37:20.327501 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-17 00:37:20.327505 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-17 00:37:20.327509 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:37:20.327514 | orchestrator | 2026-01-17 00:37:20.327518 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-01-17 00:37:20.327532 | orchestrator | Saturday 17 January 2026 00:37:10 +0000 (0:00:01.051) 0:00:46.084 ****** 2026-01-17 00:37:20.327537 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 00:37:20.327541 | orchestrator | 2026-01-17 00:37:20.327546 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-01-17 00:37:20.327550 | orchestrator | Saturday 17 January 2026 00:37:11 +0000 (0:00:01.306) 0:00:47.391 ****** 2026-01-17 00:37:20.327555 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:37:20.327559 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:37:20.327563 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:37:20.327567 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:37:20.327572 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:37:20.327576 | orchestrator | 
skipping: [testbed-node-4] 2026-01-17 00:37:20.327580 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:37:20.327584 | orchestrator | 2026-01-17 00:37:20.327589 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-01-17 00:37:20.327593 | orchestrator | Saturday 17 January 2026 00:37:12 +0000 (0:00:00.674) 0:00:48.066 ****** 2026-01-17 00:37:20.327597 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:37:20.327601 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:37:20.327606 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:37:20.327610 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:37:20.327614 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:37:20.327618 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:37:20.327622 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:37:20.327627 | orchestrator | 2026-01-17 00:37:20.327631 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-01-17 00:37:20.327635 | orchestrator | Saturday 17 January 2026 00:37:12 +0000 (0:00:00.830) 0:00:48.896 ****** 2026-01-17 00:37:20.327640 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:37:20.327644 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:37:20.327648 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:37:20.327652 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:37:20.327657 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:37:20.327661 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:37:20.327665 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:37:20.327669 | orchestrator | 2026-01-17 00:37:20.327674 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-01-17 00:37:20.327678 | orchestrator | Saturday 17 January 2026 00:37:13 +0000 (0:00:00.653) 0:00:49.549 ****** 2026-01-17 00:37:20.327682 | orchestrator | ok: 
[testbed-node-1] 2026-01-17 00:37:20.327686 | orchestrator | ok: [testbed-manager] 2026-01-17 00:37:20.327691 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:37:20.327695 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:37:20.327699 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:37:20.327707 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:37:20.327711 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:37:20.327715 | orchestrator | 2026-01-17 00:37:20.327720 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-01-17 00:37:20.327725 | orchestrator | Saturday 17 January 2026 00:37:15 +0000 (0:00:01.863) 0:00:51.412 ****** 2026-01-17 00:37:20.327730 | orchestrator | ok: [testbed-manager] 2026-01-17 00:37:20.327734 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:37:20.327739 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:37:20.327744 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:37:20.327749 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:37:20.327754 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:37:20.327759 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:37:20.327764 | orchestrator | 2026-01-17 00:37:20.327768 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-01-17 00:37:20.327773 | orchestrator | Saturday 17 January 2026 00:37:16 +0000 (0:00:01.005) 0:00:52.417 ****** 2026-01-17 00:37:20.327778 | orchestrator | ok: [testbed-manager] 2026-01-17 00:37:20.327783 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:37:20.327787 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:37:20.327792 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:37:20.327797 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:37:20.327802 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:37:20.327806 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:37:20.327811 | orchestrator | 2026-01-17 00:37:20.327816 | orchestrator | RUNNING HANDLER 
[osism.commons.network : Reload systemd-networkd] ************** 2026-01-17 00:37:20.327821 | orchestrator | Saturday 17 January 2026 00:37:18 +0000 (0:00:02.389) 0:00:54.807 ****** 2026-01-17 00:37:20.327826 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:37:20.327830 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:37:20.327839 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:37:20.327844 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:37:20.327848 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:37:20.327853 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:37:20.327858 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:37:20.327863 | orchestrator | 2026-01-17 00:37:20.327868 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-01-17 00:37:20.327873 | orchestrator | Saturday 17 January 2026 00:37:19 +0000 (0:00:00.848) 0:00:55.655 ****** 2026-01-17 00:37:20.327877 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:37:20.327882 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:37:20.327887 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:37:20.327892 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:37:20.327896 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:37:20.327901 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:37:20.327906 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:37:20.327911 | orchestrator | 2026-01-17 00:37:20.327916 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:37:20.327921 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-17 00:37:20.327928 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-17 00:37:20.327936 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 
failed=0 skipped=9  rescued=0 ignored=0 2026-01-17 00:37:20.775863 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-17 00:37:20.775962 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-17 00:37:20.775976 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-17 00:37:20.776011 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-17 00:37:20.776023 | orchestrator | 2026-01-17 00:37:20.776035 | orchestrator | 2026-01-17 00:37:20.776046 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 00:37:20.776059 | orchestrator | Saturday 17 January 2026 00:37:20 +0000 (0:00:00.560) 0:00:56.216 ****** 2026-01-17 00:37:20.776069 | orchestrator | =============================================================================== 2026-01-17 00:37:20.776080 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.77s 2026-01-17 00:37:20.776091 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.49s 2026-01-17 00:37:20.776102 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.54s 2026-01-17 00:37:20.776112 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.47s 2026-01-17 00:37:20.776123 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.39s 2026-01-17 00:37:20.776133 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.30s 2026-01-17 00:37:20.776144 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.01s 2026-01-17 00:37:20.776155 | orchestrator | osism.commons.network : Remove ifupdown package 
------------------------- 1.90s 2026-01-17 00:37:20.776165 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.86s 2026-01-17 00:37:20.776176 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.78s 2026-01-17 00:37:20.776187 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.77s 2026-01-17 00:37:20.776197 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.76s 2026-01-17 00:37:20.776275 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.72s 2026-01-17 00:37:20.776287 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.47s 2026-01-17 00:37:20.776298 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.31s 2026-01-17 00:37:20.776309 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.29s 2026-01-17 00:37:20.776319 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.28s 2026-01-17 00:37:20.776330 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.24s 2026-01-17 00:37:20.776340 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.19s 2026-01-17 00:37:20.776351 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.07s 2026-01-17 00:37:21.108098 | orchestrator | + osism apply wireguard 2026-01-17 00:37:33.242377 | orchestrator | 2026-01-17 00:37:33 | INFO  | Task 7313cf76-4f02-493c-9185-72951fd6bb11 (wireguard) was prepared for execution. 2026-01-17 00:37:33.242506 | orchestrator | 2026-01-17 00:37:33 | INFO  | It takes a moment until task 7313cf76-4f02-493c-9185-72951fd6bb11 (wireguard) has been started and output is visible here. 
2026-01-17 00:37:54.601721 | orchestrator | 2026-01-17 00:37:54.601842 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-01-17 00:37:54.601856 | orchestrator | 2026-01-17 00:37:54.601880 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-01-17 00:37:54.601888 | orchestrator | Saturday 17 January 2026 00:37:37 +0000 (0:00:00.253) 0:00:00.253 ****** 2026-01-17 00:37:54.601896 | orchestrator | ok: [testbed-manager] 2026-01-17 00:37:54.601905 | orchestrator | 2026-01-17 00:37:54.601916 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-01-17 00:37:54.601924 | orchestrator | Saturday 17 January 2026 00:37:39 +0000 (0:00:01.747) 0:00:02.001 ****** 2026-01-17 00:37:54.601931 | orchestrator | changed: [testbed-manager] 2026-01-17 00:37:54.601958 | orchestrator | 2026-01-17 00:37:54.601966 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-01-17 00:37:54.601973 | orchestrator | Saturday 17 January 2026 00:37:46 +0000 (0:00:07.347) 0:00:09.348 ****** 2026-01-17 00:37:54.601981 | orchestrator | changed: [testbed-manager] 2026-01-17 00:37:54.601988 | orchestrator | 2026-01-17 00:37:54.601995 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-01-17 00:37:54.602002 | orchestrator | Saturday 17 January 2026 00:37:47 +0000 (0:00:00.573) 0:00:09.922 ****** 2026-01-17 00:37:54.602012 | orchestrator | changed: [testbed-manager] 2026-01-17 00:37:54.602086 | orchestrator | 2026-01-17 00:37:54.602098 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-01-17 00:37:54.602111 | orchestrator | Saturday 17 January 2026 00:37:47 +0000 (0:00:00.445) 0:00:10.368 ****** 2026-01-17 00:37:54.602124 | orchestrator | ok: [testbed-manager] 2026-01-17 00:37:54.602136 | orchestrator | 2026-01-17 
00:37:54.602149 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-01-17 00:37:54.602158 | orchestrator | Saturday 17 January 2026 00:37:48 +0000 (0:00:00.680) 0:00:11.048 ****** 2026-01-17 00:37:54.602165 | orchestrator | ok: [testbed-manager] 2026-01-17 00:37:54.602172 | orchestrator | 2026-01-17 00:37:54.602179 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-01-17 00:37:54.602186 | orchestrator | Saturday 17 January 2026 00:37:48 +0000 (0:00:00.430) 0:00:11.478 ****** 2026-01-17 00:37:54.602193 | orchestrator | ok: [testbed-manager] 2026-01-17 00:37:54.602200 | orchestrator | 2026-01-17 00:37:54.602207 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-01-17 00:37:54.602214 | orchestrator | Saturday 17 January 2026 00:37:49 +0000 (0:00:00.458) 0:00:11.937 ****** 2026-01-17 00:37:54.602222 | orchestrator | changed: [testbed-manager] 2026-01-17 00:37:54.602229 | orchestrator | 2026-01-17 00:37:54.602236 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-01-17 00:37:54.602243 | orchestrator | Saturday 17 January 2026 00:37:50 +0000 (0:00:01.180) 0:00:13.117 ****** 2026-01-17 00:37:54.602250 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-17 00:37:54.602257 | orchestrator | changed: [testbed-manager] 2026-01-17 00:37:54.602265 | orchestrator | 2026-01-17 00:37:54.602272 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-01-17 00:37:54.602282 | orchestrator | Saturday 17 January 2026 00:37:51 +0000 (0:00:00.917) 0:00:14.034 ****** 2026-01-17 00:37:54.602289 | orchestrator | changed: [testbed-manager] 2026-01-17 00:37:54.602298 | orchestrator | 2026-01-17 00:37:54.602333 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-01-17 
00:37:54.602345 | orchestrator | Saturday 17 January 2026 00:37:53 +0000 (0:00:01.733) 0:00:15.768 ****** 2026-01-17 00:37:54.602353 | orchestrator | changed: [testbed-manager] 2026-01-17 00:37:54.602362 | orchestrator | 2026-01-17 00:37:54.602369 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:37:54.602378 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:37:54.602387 | orchestrator | 2026-01-17 00:37:54.602395 | orchestrator | 2026-01-17 00:37:54.602403 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 00:37:54.602411 | orchestrator | Saturday 17 January 2026 00:37:54 +0000 (0:00:00.978) 0:00:16.746 ****** 2026-01-17 00:37:54.602420 | orchestrator | =============================================================================== 2026-01-17 00:37:54.602428 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.35s 2026-01-17 00:37:54.602436 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.75s 2026-01-17 00:37:54.602444 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.73s 2026-01-17 00:37:54.602452 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.18s 2026-01-17 00:37:54.602461 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.98s 2026-01-17 00:37:54.602477 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.92s 2026-01-17 00:37:54.602484 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.68s 2026-01-17 00:37:54.602491 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s 2026-01-17 00:37:54.602498 | orchestrator | osism.services.wireguard : Get 
private key - server --------------------- 0.46s 2026-01-17 00:37:54.602505 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s 2026-01-17 00:37:54.602512 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.43s 2026-01-17 00:37:54.946496 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-01-17 00:37:54.986497 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-01-17 00:37:54.986566 | orchestrator | Dload Upload Total Spent Left Speed 2026-01-17 00:37:55.062108 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 184 0 --:--:-- --:--:-- --:--:-- 184 100 14 100 14 0 0 184 0 --:--:-- --:--:-- --:--:-- 184 2026-01-17 00:37:55.079421 | orchestrator | + osism apply --environment custom workarounds 2026-01-17 00:37:57.099686 | orchestrator | 2026-01-17 00:37:57 | INFO  | Trying to run play workarounds in environment custom 2026-01-17 00:38:07.270124 | orchestrator | 2026-01-17 00:38:07 | INFO  | Task 19755e0a-1e29-4c8b-9779-8193d177f152 (workarounds) was prepared for execution. 2026-01-17 00:38:07.270260 | orchestrator | 2026-01-17 00:38:07 | INFO  | It takes a moment until task 19755e0a-1e29-4c8b-9779-8193d177f152 (workarounds) has been started and output is visible here. 
2026-01-17 00:38:32.969844 | orchestrator | 2026-01-17 00:38:32.969954 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-17 00:38:32.969972 | orchestrator | 2026-01-17 00:38:32.969984 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-01-17 00:38:32.969996 | orchestrator | Saturday 17 January 2026 00:38:11 +0000 (0:00:00.127) 0:00:00.127 ****** 2026-01-17 00:38:32.970008 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-01-17 00:38:32.970064 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-01-17 00:38:32.970077 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-01-17 00:38:32.970088 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-01-17 00:38:32.970099 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-01-17 00:38:32.970110 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-01-17 00:38:32.970121 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-01-17 00:38:32.970132 | orchestrator | 2026-01-17 00:38:32.970142 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-01-17 00:38:32.970153 | orchestrator | 2026-01-17 00:38:32.970164 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-01-17 00:38:32.970174 | orchestrator | Saturday 17 January 2026 00:38:12 +0000 (0:00:00.794) 0:00:00.922 ****** 2026-01-17 00:38:32.970186 | orchestrator | ok: [testbed-manager] 2026-01-17 00:38:32.970198 | orchestrator | 2026-01-17 00:38:32.970210 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-01-17 00:38:32.970220 | orchestrator | 2026-01-17 00:38:32.970231 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-01-17 00:38:32.970242 | orchestrator | Saturday 17 January 2026 00:38:14 +0000 (0:00:02.569) 0:00:03.492 ****** 2026-01-17 00:38:32.970253 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:38:32.970264 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:38:32.970274 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:38:32.970285 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:38:32.970321 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:38:32.970333 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:38:32.970343 | orchestrator | 2026-01-17 00:38:32.970354 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-01-17 00:38:32.970365 | orchestrator | 2026-01-17 00:38:32.970376 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-01-17 00:38:32.970388 | orchestrator | Saturday 17 January 2026 00:38:16 +0000 (0:00:01.794) 0:00:05.286 ****** 2026-01-17 00:38:32.970401 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-17 00:38:32.970447 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-17 00:38:32.970462 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-17 00:38:32.970474 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-17 00:38:32.970486 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-17 00:38:32.970498 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-17 00:38:32.970510 | orchestrator | 2026-01-17 00:38:32.970523 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-01-17 00:38:32.970536 | orchestrator | Saturday 17 January 2026 00:38:18 +0000 (0:00:01.610) 0:00:06.897 ****** 2026-01-17 00:38:32.970547 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:38:32.970559 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:38:32.970572 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:38:32.970584 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:38:32.970596 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:38:32.970608 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:38:32.970620 | orchestrator | 2026-01-17 00:38:32.970633 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-01-17 00:38:32.970645 | orchestrator | Saturday 17 January 2026 00:38:21 +0000 (0:00:03.715) 0:00:10.613 ****** 2026-01-17 00:38:32.970657 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:38:32.970687 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:38:32.970700 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:38:32.970712 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:38:32.970725 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:38:32.970737 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:38:32.970749 | orchestrator | 2026-01-17 00:38:32.970760 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-01-17 00:38:32.970771 | orchestrator | 2026-01-17 00:38:32.970782 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-01-17 00:38:32.970792 | orchestrator | Saturday 17 January 2026 00:38:22 +0000 (0:00:00.736) 0:00:11.349 ****** 2026-01-17 00:38:32.970803 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:38:32.970813 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:38:32.970826 | orchestrator | changed: [testbed-node-5] 2026-01-17 
00:38:32.970844 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:38:32.970861 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:38:32.970876 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:38:32.970891 | orchestrator | changed: [testbed-manager] 2026-01-17 00:38:32.970902 | orchestrator | 2026-01-17 00:38:32.970916 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-01-17 00:38:32.970933 | orchestrator | Saturday 17 January 2026 00:38:24 +0000 (0:00:01.614) 0:00:12.964 ****** 2026-01-17 00:38:32.970943 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:38:32.970954 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:38:32.970964 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:38:32.970975 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:38:32.970986 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:38:32.971005 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:38:32.971033 | orchestrator | changed: [testbed-manager] 2026-01-17 00:38:32.971044 | orchestrator | 2026-01-17 00:38:32.971055 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-01-17 00:38:32.971066 | orchestrator | Saturday 17 January 2026 00:38:25 +0000 (0:00:01.573) 0:00:14.537 ****** 2026-01-17 00:38:32.971077 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:38:32.971087 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:38:32.971104 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:38:32.971121 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:38:32.971140 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:38:32.971167 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:38:32.971188 | orchestrator | ok: [testbed-manager] 2026-01-17 00:38:32.971205 | orchestrator | 2026-01-17 00:38:32.971223 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-01-17 00:38:32.971241 | orchestrator 
| Saturday 17 January 2026 00:38:27 +0000 (0:00:01.600) 0:00:16.138 ****** 2026-01-17 00:38:32.971260 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:38:32.971279 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:38:32.971297 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:38:32.971314 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:38:32.971325 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:38:32.971335 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:38:32.971345 | orchestrator | changed: [testbed-manager] 2026-01-17 00:38:32.971356 | orchestrator | 2026-01-17 00:38:32.971367 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-01-17 00:38:32.971377 | orchestrator | Saturday 17 January 2026 00:38:29 +0000 (0:00:01.851) 0:00:17.990 ****** 2026-01-17 00:38:32.971388 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:38:32.971399 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:38:32.971409 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:38:32.971491 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:38:32.971503 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:38:32.971514 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:38:32.971525 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:38:32.971536 | orchestrator | 2026-01-17 00:38:32.971547 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-01-17 00:38:32.971558 | orchestrator | 2026-01-17 00:38:32.971569 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-01-17 00:38:32.971579 | orchestrator | Saturday 17 January 2026 00:38:29 +0000 (0:00:00.652) 0:00:18.643 ****** 2026-01-17 00:38:32.971590 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:38:32.971601 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:38:32.971612 | orchestrator | ok: [testbed-node-3] 
2026-01-17 00:38:32.971623 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:38:32.971634 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:38:32.971645 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:38:32.971656 | orchestrator | ok: [testbed-manager] 2026-01-17 00:38:32.971666 | orchestrator | 2026-01-17 00:38:32.971677 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:38:32.971690 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-17 00:38:32.971702 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:38:32.971713 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:38:32.971724 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:38:32.971735 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:38:32.971757 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:38:32.971768 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:38:32.971779 | orchestrator | 2026-01-17 00:38:32.971790 | orchestrator | 2026-01-17 00:38:32.971802 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 00:38:32.971813 | orchestrator | Saturday 17 January 2026 00:38:32 +0000 (0:00:02.949) 0:00:21.592 ****** 2026-01-17 00:38:32.971824 | orchestrator | =============================================================================== 2026-01-17 00:38:32.971835 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.72s 2026-01-17 00:38:32.971846 | orchestrator | Install python3-docker 
-------------------------------------------------- 2.95s 2026-01-17 00:38:32.971856 | orchestrator | Apply netplan configuration --------------------------------------------- 2.57s 2026-01-17 00:38:32.971867 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.85s 2026-01-17 00:38:32.971878 | orchestrator | Apply netplan configuration --------------------------------------------- 1.79s 2026-01-17 00:38:32.971889 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.61s 2026-01-17 00:38:32.971907 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.61s 2026-01-17 00:38:32.971918 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.60s 2026-01-17 00:38:32.971929 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.57s 2026-01-17 00:38:32.971939 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.79s 2026-01-17 00:38:32.971950 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.74s 2026-01-17 00:38:32.971972 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.65s 2026-01-17 00:38:33.741971 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-01-17 00:38:45.947852 | orchestrator | 2026-01-17 00:38:45 | INFO  | Task 623d22f1-7771-45ea-987c-6223385059a1 (reboot) was prepared for execution. 2026-01-17 00:38:45.947983 | orchestrator | 2026-01-17 00:38:45 | INFO  | It takes a moment until task 623d22f1-7771-45ea-987c-6223385059a1 (reboot) has been started and output is visible here. 
2026-01-17 00:38:56.562888 | orchestrator | 2026-01-17 00:38:56.562989 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-17 00:38:56.563006 | orchestrator | 2026-01-17 00:38:56.563018 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-17 00:38:56.563030 | orchestrator | Saturday 17 January 2026 00:38:50 +0000 (0:00:00.207) 0:00:00.207 ****** 2026-01-17 00:38:56.563040 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:38:56.563052 | orchestrator | 2026-01-17 00:38:56.563062 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-17 00:38:56.563073 | orchestrator | Saturday 17 January 2026 00:38:50 +0000 (0:00:00.106) 0:00:00.313 ****** 2026-01-17 00:38:56.563083 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:38:56.563094 | orchestrator | 2026-01-17 00:38:56.563104 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-17 00:38:56.563115 | orchestrator | Saturday 17 January 2026 00:38:51 +0000 (0:00:00.987) 0:00:01.301 ****** 2026-01-17 00:38:56.563126 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:38:56.563136 | orchestrator | 2026-01-17 00:38:56.563146 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-17 00:38:56.563157 | orchestrator | 2026-01-17 00:38:56.563167 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-17 00:38:56.563178 | orchestrator | Saturday 17 January 2026 00:38:51 +0000 (0:00:00.127) 0:00:01.429 ****** 2026-01-17 00:38:56.563214 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:38:56.563225 | orchestrator | 2026-01-17 00:38:56.563236 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-17 00:38:56.563246 | orchestrator | Saturday 17 January 
2026 00:38:51 +0000 (0:00:00.128) 0:00:01.558 ****** 2026-01-17 00:38:56.563256 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:38:56.563266 | orchestrator | 2026-01-17 00:38:56.563277 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-17 00:38:56.563288 | orchestrator | Saturday 17 January 2026 00:38:52 +0000 (0:00:00.709) 0:00:02.267 ****** 2026-01-17 00:38:56.563299 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:38:56.563309 | orchestrator | 2026-01-17 00:38:56.563320 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-17 00:38:56.563330 | orchestrator | 2026-01-17 00:38:56.563341 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-17 00:38:56.563351 | orchestrator | Saturday 17 January 2026 00:38:52 +0000 (0:00:00.130) 0:00:02.398 ****** 2026-01-17 00:38:56.563361 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:38:56.563371 | orchestrator | 2026-01-17 00:38:56.563383 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-17 00:38:56.563393 | orchestrator | Saturday 17 January 2026 00:38:52 +0000 (0:00:00.215) 0:00:02.613 ****** 2026-01-17 00:38:56.563403 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:38:56.563413 | orchestrator | 2026-01-17 00:38:56.563424 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-17 00:38:56.563435 | orchestrator | Saturday 17 January 2026 00:38:53 +0000 (0:00:00.688) 0:00:03.301 ****** 2026-01-17 00:38:56.563446 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:38:56.563456 | orchestrator | 2026-01-17 00:38:56.563467 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-17 00:38:56.563502 | orchestrator | 2026-01-17 00:38:56.563513 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2026-01-17 00:38:56.563525 | orchestrator | Saturday 17 January 2026 00:38:53 +0000 (0:00:00.110) 0:00:03.412 ****** 2026-01-17 00:38:56.563536 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:38:56.563546 | orchestrator | 2026-01-17 00:38:56.563557 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-17 00:38:56.563567 | orchestrator | Saturday 17 January 2026 00:38:53 +0000 (0:00:00.108) 0:00:03.520 ****** 2026-01-17 00:38:56.563577 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:38:56.563588 | orchestrator | 2026-01-17 00:38:56.563599 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-17 00:38:56.563609 | orchestrator | Saturday 17 January 2026 00:38:54 +0000 (0:00:00.700) 0:00:04.221 ****** 2026-01-17 00:38:56.563620 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:38:56.563631 | orchestrator | 2026-01-17 00:38:56.563642 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-17 00:38:56.563652 | orchestrator | 2026-01-17 00:38:56.563663 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-17 00:38:56.563673 | orchestrator | Saturday 17 January 2026 00:38:54 +0000 (0:00:00.136) 0:00:04.357 ****** 2026-01-17 00:38:56.563684 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:38:56.563695 | orchestrator | 2026-01-17 00:38:56.563705 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-17 00:38:56.563715 | orchestrator | Saturday 17 January 2026 00:38:54 +0000 (0:00:00.124) 0:00:04.482 ****** 2026-01-17 00:38:56.563726 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:38:56.563736 | orchestrator | 2026-01-17 00:38:56.563775 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-01-17 00:38:56.563787 | orchestrator | Saturday 17 January 2026 00:38:55 +0000 (0:00:00.636) 0:00:05.119 ****** 2026-01-17 00:38:56.563799 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:38:56.563810 | orchestrator | 2026-01-17 00:38:56.563821 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-17 00:38:56.563843 | orchestrator | 2026-01-17 00:38:56.563854 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-17 00:38:56.563865 | orchestrator | Saturday 17 January 2026 00:38:55 +0000 (0:00:00.133) 0:00:05.252 ****** 2026-01-17 00:38:56.563876 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:38:56.563885 | orchestrator | 2026-01-17 00:38:56.563895 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-17 00:38:56.563907 | orchestrator | Saturday 17 January 2026 00:38:55 +0000 (0:00:00.130) 0:00:05.383 ****** 2026-01-17 00:38:56.563917 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:38:56.563927 | orchestrator | 2026-01-17 00:38:56.563933 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-17 00:38:56.563939 | orchestrator | Saturday 17 January 2026 00:38:56 +0000 (0:00:00.699) 0:00:06.083 ****** 2026-01-17 00:38:56.563961 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:38:56.563967 | orchestrator | 2026-01-17 00:38:56.563974 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:38:56.563981 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:38:56.563988 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:38:56.563994 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-01-17 00:38:56.564000 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:38:56.564006 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:38:56.564012 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:38:56.564018 | orchestrator | 2026-01-17 00:38:56.564024 | orchestrator | 2026-01-17 00:38:56.564031 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 00:38:56.564037 | orchestrator | Saturday 17 January 2026 00:38:56 +0000 (0:00:00.044) 0:00:06.127 ****** 2026-01-17 00:38:56.564043 | orchestrator | =============================================================================== 2026-01-17 00:38:56.564049 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.42s 2026-01-17 00:38:56.564055 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.81s 2026-01-17 00:38:56.564061 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.68s 2026-01-17 00:38:56.893987 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-01-17 00:39:09.072279 | orchestrator | 2026-01-17 00:39:09 | INFO  | Task cf1fb58a-f842-43d9-a680-34211b8812bd (wait-for-connection) was prepared for execution. 2026-01-17 00:39:09.072386 | orchestrator | 2026-01-17 00:39:09 | INFO  | It takes a moment until task cf1fb58a-f842-43d9-a680-34211b8812bd (wait-for-connection) has been started and output is visible here. 
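The play above reboots each node without waiting, and the follow-up wait-for-connection play then polls until SSH answers again. Outside Ansible, that second step can be sketched in shell; the connect timeout, 300 s budget, and node list below are illustrative assumptions, not values taken from this job:

```shell
# Hypothetical SSH-reachability wait, mirroring what wait-for-connection does.
# Timeout values are assumptions for illustration.
wait_for_ssh() {
    local host=$1
    local deadline=$(( SECONDS + 300 ))      # overall budget per host
    until ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
        if (( SECONDS >= deadline )); then
            echo "$host still unreachable after reboot" >&2
            return 1
        fi
        sleep 5
    done
}

# Usage sketch (parallel wait over all nodes):
#   for node in testbed-node-{0..5}; do wait_for_ssh "$node" & done; wait
```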
2026-01-17 00:39:25.398922 | orchestrator | 2026-01-17 00:39:25.399010 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-01-17 00:39:25.399021 | orchestrator | 2026-01-17 00:39:25.399028 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-01-17 00:39:25.399035 | orchestrator | Saturday 17 January 2026 00:39:13 +0000 (0:00:00.240) 0:00:00.240 ****** 2026-01-17 00:39:25.399042 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:39:25.399049 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:39:25.399055 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:39:25.399061 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:39:25.399088 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:39:25.399095 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:39:25.399101 | orchestrator | 2026-01-17 00:39:25.399107 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:39:25.399114 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:39:25.399122 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:39:25.399128 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:39:25.399135 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:39:25.399141 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:39:25.399159 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:39:25.399166 | orchestrator | 2026-01-17 00:39:25.399172 | orchestrator | 2026-01-17 00:39:25.399178 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-17 00:39:25.399184 | orchestrator | Saturday 17 January 2026 00:39:25 +0000 (0:00:11.571) 0:00:11.812 ****** 2026-01-17 00:39:25.399190 | orchestrator | =============================================================================== 2026-01-17 00:39:25.399196 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.57s 2026-01-17 00:39:25.718252 | orchestrator | + osism apply hddtemp 2026-01-17 00:39:37.807671 | orchestrator | 2026-01-17 00:39:37 | INFO  | Task 91494b96-6b7c-493c-bbe1-32c4cb97870d (hddtemp) was prepared for execution. 2026-01-17 00:39:37.807789 | orchestrator | 2026-01-17 00:39:37 | INFO  | It takes a moment until task 91494b96-6b7c-493c-bbe1-32c4cb97870d (hddtemp) has been started and output is visible here. 2026-01-17 00:40:06.879942 | orchestrator | 2026-01-17 00:40:06.880051 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-01-17 00:40:06.880068 | orchestrator | 2026-01-17 00:40:06.880080 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-01-17 00:40:06.880092 | orchestrator | Saturday 17 January 2026 00:39:42 +0000 (0:00:00.277) 0:00:00.277 ****** 2026-01-17 00:40:06.880103 | orchestrator | ok: [testbed-manager] 2026-01-17 00:40:06.880115 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:40:06.880126 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:40:06.880137 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:40:06.880147 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:40:06.880158 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:40:06.880169 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:40:06.880179 | orchestrator | 2026-01-17 00:40:06.880190 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-01-17 00:40:06.880201 | orchestrator | Saturday 17 January 2026 
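The hddtemp play that follows removes the legacy hddtemp package and replaces it with the in-kernel drivetemp module plus lm-sensors. Roughly, the module enable/load steps it performs could look like the sketch below (the helper name and modules-load.d path are assumptions; the role's actual tasks live in osism.services.hddtemp):

```shell
# Hypothetical equivalent of the role's "Enable"/"Load Kernel Module drivetemp"
# tasks; /etc/modules-load.d is the usual systemd convention, assumed here.
ensure_drivetemp() {
    local conf=${1:-/etc/modules-load.d/drivetemp.conf}
    echo drivetemp | sudo tee "$conf" >/dev/null   # persist across reboots
    if ! lsmod | grep -q '^drivetemp'; then        # load immediately if absent
        sudo modprobe drivetemp
    fi
}
```

In the play output, only testbed-manager needs the immediate load step; the module is already loaded on the nodes, so the "Load Kernel Module drivetemp" task skips them.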
00:39:43 +0000 (0:00:00.774) 0:00:01.051 ****** 2026-01-17 00:40:06.880214 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 00:40:06.880228 | orchestrator | 2026-01-17 00:40:06.880239 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-01-17 00:40:06.880250 | orchestrator | Saturday 17 January 2026 00:39:44 +0000 (0:00:01.216) 0:00:02.268 ****** 2026-01-17 00:40:06.880260 | orchestrator | ok: [testbed-manager] 2026-01-17 00:40:06.880272 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:40:06.880283 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:40:06.880294 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:40:06.880305 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:40:06.880341 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:40:06.880352 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:40:06.880363 | orchestrator | 2026-01-17 00:40:06.880374 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-01-17 00:40:06.880385 | orchestrator | Saturday 17 January 2026 00:39:46 +0000 (0:00:02.070) 0:00:04.339 ****** 2026-01-17 00:40:06.880395 | orchestrator | changed: [testbed-manager] 2026-01-17 00:40:06.880406 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:40:06.880417 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:40:06.880428 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:40:06.880438 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:40:06.880449 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:40:06.880459 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:40:06.880470 | orchestrator | 2026-01-17 00:40:06.880480 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-01-17 00:40:06.880491 | orchestrator | Saturday 17 January 2026 00:39:47 +0000 (0:00:01.212) 0:00:05.551 ****** 2026-01-17 00:40:06.880502 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:40:06.880512 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:40:06.880523 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:40:06.880533 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:40:06.880544 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:40:06.880554 | orchestrator | ok: [testbed-manager] 2026-01-17 00:40:06.880565 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:40:06.880575 | orchestrator | 2026-01-17 00:40:06.880586 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-01-17 00:40:06.880597 | orchestrator | Saturday 17 January 2026 00:39:48 +0000 (0:00:01.217) 0:00:06.769 ****** 2026-01-17 00:40:06.880608 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:40:06.880618 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:40:06.880629 | orchestrator | changed: [testbed-manager] 2026-01-17 00:40:06.880639 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:40:06.880712 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:40:06.880725 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:40:06.880736 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:40:06.880747 | orchestrator | 2026-01-17 00:40:06.880757 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-01-17 00:40:06.880768 | orchestrator | Saturday 17 January 2026 00:39:49 +0000 (0:00:00.864) 0:00:07.634 ****** 2026-01-17 00:40:06.880779 | orchestrator | changed: [testbed-manager] 2026-01-17 00:40:06.880789 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:40:06.880800 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:40:06.880811 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:40:06.880821 | orchestrator | changed: 
[testbed-node-2] 2026-01-17 00:40:06.880832 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:40:06.880843 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:40:06.880853 | orchestrator | 2026-01-17 00:40:06.880864 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-01-17 00:40:06.880875 | orchestrator | Saturday 17 January 2026 00:40:02 +0000 (0:00:13.376) 0:00:21.010 ****** 2026-01-17 00:40:06.880886 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 00:40:06.880897 | orchestrator | 2026-01-17 00:40:06.880922 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-01-17 00:40:06.880934 | orchestrator | Saturday 17 January 2026 00:40:04 +0000 (0:00:01.290) 0:00:22.301 ****** 2026-01-17 00:40:06.880944 | orchestrator | changed: [testbed-manager] 2026-01-17 00:40:06.880955 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:40:06.880966 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:40:06.880977 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:40:06.880987 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:40:06.880998 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:40:06.881017 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:40:06.881027 | orchestrator | 2026-01-17 00:40:06.881038 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:40:06.881049 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:40:06.881081 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-17 00:40:06.881093 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-17 00:40:06.881104 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-17 00:40:06.881114 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-17 00:40:06.881125 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-17 00:40:06.881135 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-17 00:40:06.881146 | orchestrator | 2026-01-17 00:40:06.881157 | orchestrator | 2026-01-17 00:40:06.881168 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 00:40:06.881179 | orchestrator | Saturday 17 January 2026 00:40:06 +0000 (0:00:02.045) 0:00:24.347 ****** 2026-01-17 00:40:06.881189 | orchestrator | =============================================================================== 2026-01-17 00:40:06.881200 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.38s 2026-01-17 00:40:06.881211 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.07s 2026-01-17 00:40:06.881221 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.05s 2026-01-17 00:40:06.881232 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.29s 2026-01-17 00:40:06.881243 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.22s 2026-01-17 00:40:06.881253 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.22s 2026-01-17 00:40:06.881264 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.21s 2026-01-17 00:40:06.881274 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.86s 2026-01-17 00:40:06.881285 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.77s 2026-01-17 00:40:07.228165 | orchestrator | ++ semver latest 7.1.1 2026-01-17 00:40:07.286484 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-17 00:40:07.286573 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-17 00:40:07.286588 | orchestrator | + sudo systemctl restart manager.service 2026-01-17 00:40:21.077481 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-17 00:40:21.077583 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-17 00:40:21.077594 | orchestrator | + local max_attempts=60 2026-01-17 00:40:21.077601 | orchestrator | + local name=ceph-ansible 2026-01-17 00:40:21.077607 | orchestrator | + local attempt_num=1 2026-01-17 00:40:21.077614 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-17 00:40:21.114406 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-17 00:40:21.114473 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-17 00:40:21.114480 | orchestrator | + sleep 5 2026-01-17 00:40:26.118944 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-17 00:40:26.154499 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-17 00:40:26.154585 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-17 00:40:26.154595 | orchestrator | + sleep 5 2026-01-17 00:40:31.157999 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-17 00:40:31.185821 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-17 00:40:31.185921 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-17 00:40:31.185933 | orchestrator | + sleep 5 2026-01-17 00:40:36.189844 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-17 00:40:36.235090 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-17 00:40:36.235180 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-17 00:40:36.235188 | orchestrator | + sleep 5 2026-01-17 00:40:41.241377 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-17 00:40:41.283575 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-17 00:40:41.283657 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-17 00:40:41.283670 | orchestrator | + sleep 5 2026-01-17 00:40:46.289041 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-17 00:40:46.328357 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-17 00:40:46.328448 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-17 00:40:46.328462 | orchestrator | + sleep 5 2026-01-17 00:40:51.334108 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-17 00:40:51.363964 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-17 00:40:51.364058 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-17 00:40:51.364074 | orchestrator | + sleep 5 2026-01-17 00:40:56.369213 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-17 00:40:56.414454 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-17 00:40:56.414542 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-17 00:40:56.414570 | orchestrator | + sleep 5 2026-01-17 00:41:01.417052 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-17 00:41:01.450585 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-17 00:41:01.450706 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-17 00:41:01.450725 | orchestrator | + sleep 5 2026-01-17 00:41:06.455126 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-17 00:41:06.496776 | orchestrator | + [[ starting == 
\h\e\a\l\t\h\y ]] 2026-01-17 00:41:06.496908 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-17 00:41:06.496924 | orchestrator | + sleep 5 2026-01-17 00:41:11.502281 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-17 00:41:11.541784 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-17 00:41:11.542102 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-17 00:41:11.542127 | orchestrator | + sleep 5 2026-01-17 00:41:16.545517 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-17 00:41:16.585219 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-17 00:41:16.585383 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-17 00:41:16.585399 | orchestrator | + sleep 5 2026-01-17 00:41:21.590972 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-17 00:41:21.635480 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-17 00:41:21.635571 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-17 00:41:21.635586 | orchestrator | + sleep 5 2026-01-17 00:41:26.640781 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-17 00:41:26.684792 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-17 00:41:26.684878 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-17 00:41:26.685004 | orchestrator | + local max_attempts=60 2026-01-17 00:41:26.685013 | orchestrator | + local name=kolla-ansible 2026-01-17 00:41:26.685017 | orchestrator | + local attempt_num=1 2026-01-17 00:41:26.685910 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-17 00:41:26.723464 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-17 00:41:26.723529 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-17 00:41:26.723535 | orchestrator | + local max_attempts=60 2026-01-17 
00:41:26.723540 | orchestrator | + local name=osism-ansible 2026-01-17 00:41:26.723544 | orchestrator | + local attempt_num=1 2026-01-17 00:41:26.723549 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-01-17 00:41:26.756251 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-17 00:41:26.756359 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-17 00:41:26.756374 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-17 00:41:26.912480 | orchestrator | ARA in ceph-ansible already disabled. 2026-01-17 00:41:27.055760 | orchestrator | ARA in kolla-ansible already disabled. 2026-01-17 00:41:27.219665 | orchestrator | ARA in osism-ansible already disabled. 2026-01-17 00:41:27.355648 | orchestrator | ARA in osism-kubernetes already disabled. 2026-01-17 00:41:27.356395 | orchestrator | + osism apply gather-facts 2026-01-17 00:41:39.695803 | orchestrator | 2026-01-17 00:41:39 | INFO  | Task ad93027d-7061-449c-9b0a-bfad70442b15 (gather-facts) was prepared for execution. 2026-01-17 00:41:39.695989 | orchestrator | 2026-01-17 00:41:39 | INFO  | It takes a moment until task ad93027d-7061-449c-9b0a-bfad70442b15 (gather-facts) has been started and output is visible here. 
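The `wait_for_container_healthy` calls traced above poll `docker inspect` every five seconds until the container reports healthy. Reconstructed from that `set -x` trace (the real helper lives in the testbed configuration scripts, so details may differ), the function is roughly:

```shell
# Reconstruction of wait_for_container_healthy from the trace above; the
# docker inspect call is factored into a helper so it can be stubbed in tests.
container_health_status() {
    /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
}

wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(container_health_status "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} not healthy after ${max_attempts} attempts" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the trace, ceph-ansible cycles through `unhealthy` and `starting` for about a minute before reporting `healthy`, while kolla-ansible and osism-ansible pass on the first check.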
2026-01-17 00:41:54.215088 | orchestrator | 2026-01-17 00:41:54.215184 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-17 00:41:54.215197 | orchestrator | 2026-01-17 00:41:54.215206 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-17 00:41:54.215214 | orchestrator | Saturday 17 January 2026 00:41:43 +0000 (0:00:00.208) 0:00:00.208 ****** 2026-01-17 00:41:54.215222 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:41:54.215230 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:41:54.215238 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:41:54.215245 | orchestrator | ok: [testbed-manager] 2026-01-17 00:41:54.215252 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:41:54.215259 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:41:54.215267 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:41:54.215274 | orchestrator | 2026-01-17 00:41:54.215281 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-17 00:41:54.215288 | orchestrator | 2026-01-17 00:41:54.215295 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-17 00:41:54.215303 | orchestrator | Saturday 17 January 2026 00:41:53 +0000 (0:00:09.340) 0:00:09.549 ****** 2026-01-17 00:41:54.215310 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:41:54.215318 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:41:54.215325 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:41:54.215332 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:41:54.215340 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:41:54.215347 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:41:54.215354 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:41:54.215361 | orchestrator | 2026-01-17 00:41:54.215368 | orchestrator | PLAY RECAP 
********************************************************************* 2026-01-17 00:41:54.215376 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-17 00:41:54.215384 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-17 00:41:54.215392 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-17 00:41:54.215399 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-17 00:41:54.215407 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-17 00:41:54.215414 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-17 00:41:54.215421 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-17 00:41:54.215428 | orchestrator | 2026-01-17 00:41:54.215436 | orchestrator | 2026-01-17 00:41:54.215458 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 00:41:54.215466 | orchestrator | Saturday 17 January 2026 00:41:53 +0000 (0:00:00.515) 0:00:10.065 ****** 2026-01-17 00:41:54.215473 | orchestrator | =============================================================================== 2026-01-17 00:41:54.215499 | orchestrator | Gathers facts about hosts ----------------------------------------------- 9.34s 2026-01-17 00:41:54.215507 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2026-01-17 00:41:54.590396 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-01-17 00:41:54.602811 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-01-17 00:41:54.619048 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-01-17 00:41:54.630777 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-01-17 00:41:54.643557 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-01-17 00:41:54.655548 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-01-17 00:41:54.667019 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-01-17 00:41:54.684241 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-01-17 00:41:54.701877 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-01-17 00:41:54.716792 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-01-17 00:41:54.733574 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-01-17 00:41:54.751096 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-01-17 00:41:54.775187 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-01-17 00:41:54.795658 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-01-17 00:41:54.810956 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-01-17 00:41:54.829771 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-01-17 00:41:54.844779 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-01-17 00:41:54.862779 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-01-17 00:41:54.874002 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-01-17 00:41:54.884761 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-01-17 00:41:54.901383 | orchestrator | + [[ false == \t\r\u\e ]] 2026-01-17 00:41:55.346531 | orchestrator | ok: Runtime: 0:24:40.951049 2026-01-17 00:41:55.467059 | 2026-01-17 00:41:55.467232 | TASK [Deploy services] 2026-01-17 00:41:55.999533 | orchestrator | skipping: Conditional result was False 2026-01-17 00:41:56.018427 | 2026-01-17 00:41:56.018603 | TASK [Deploy in a nutshell] 2026-01-17 00:41:56.739413 | orchestrator | + set -e 2026-01-17 00:41:56.739531 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-17 00:41:56.739542 | orchestrator | ++ export INTERACTIVE=false 2026-01-17 00:41:56.739552 | orchestrator | ++ INTERACTIVE=false 2026-01-17 00:41:56.739558 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-17 00:41:56.739564 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-17 00:41:56.739570 | orchestrator | + source /opt/manager-vars.sh 2026-01-17 00:41:56.739594 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-17 00:41:56.739608 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-17 00:41:56.739614 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-17 00:41:56.739632 | orchestrator | ++ CEPH_VERSION=reef 2026-01-17 00:41:56.739640 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-17 00:41:56.740650 | orchestrator | 2026-01-17 
00:41:56.740675 | orchestrator | # PULL IMAGES 2026-01-17 00:41:56.740683 | orchestrator | 2026-01-17 00:41:56.740689 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-17 00:41:56.740703 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-17 00:41:56.740709 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-17 00:41:56.740718 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-17 00:41:56.740725 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-17 00:41:56.740731 | orchestrator | ++ export ARA=false 2026-01-17 00:41:56.740737 | orchestrator | ++ ARA=false 2026-01-17 00:41:56.740746 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-17 00:41:56.740756 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-17 00:41:56.740766 | orchestrator | ++ export TEMPEST=true 2026-01-17 00:41:56.740775 | orchestrator | ++ TEMPEST=true 2026-01-17 00:41:56.740784 | orchestrator | ++ export IS_ZUUL=true 2026-01-17 00:41:56.740790 | orchestrator | ++ IS_ZUUL=true 2026-01-17 00:41:56.740797 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.65 2026-01-17 00:41:56.740803 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.65 2026-01-17 00:41:56.740810 | orchestrator | ++ export EXTERNAL_API=false 2026-01-17 00:41:56.740816 | orchestrator | ++ EXTERNAL_API=false 2026-01-17 00:41:56.740822 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-17 00:41:56.740829 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-17 00:41:56.740835 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-17 00:41:56.740842 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-17 00:41:56.740848 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-17 00:41:56.740859 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-17 00:41:56.740865 | orchestrator | + echo 2026-01-17 00:41:56.740872 | orchestrator | + echo '# PULL IMAGES' 2026-01-17 00:41:56.740878 | orchestrator | + echo 2026-01-17 00:41:56.741193 | orchestrator | ++ semver latest 7.0.0 2026-01-17 00:41:56.802451 | 
orchestrator | + [[ -1 -ge 0 ]] 2026-01-17 00:41:56.802703 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-17 00:41:56.802724 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-01-17 00:41:58.782278 | orchestrator | 2026-01-17 00:41:58 | INFO  | Trying to run play pull-images in environment custom 2026-01-17 00:42:08.946674 | orchestrator | 2026-01-17 00:42:08 | INFO  | Task eda119c4-2028-41e8-aec0-4eda6ea009d7 (pull-images) was prepared for execution. 2026-01-17 00:42:08.946788 | orchestrator | 2026-01-17 00:42:08 | INFO  | Task eda119c4-2028-41e8-aec0-4eda6ea009d7 is running in background. No more output. Check ARA for logs. 2026-01-17 00:42:11.235905 | orchestrator | 2026-01-17 00:42:11 | INFO  | Trying to run play wipe-partitions in environment custom 2026-01-17 00:42:21.422186 | orchestrator | 2026-01-17 00:42:21 | INFO  | Task 2493c816-ac59-4192-9392-994d4b4c7fe4 (wipe-partitions) was prepared for execution. 2026-01-17 00:42:21.422371 | orchestrator | 2026-01-17 00:42:21 | INFO  | It takes a moment until task 2493c816-ac59-4192-9392-994d4b4c7fe4 (wipe-partitions) has been started and output is visible here. 
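The `semver latest 7.0.0` / `[[ -1 -ge 0 ]]` / `[[ latest == latest ]]` lines above are a version gate: the helper apparently prints -1, 0, or 1 for less-than, equal, or greater-than, and the literal tag `latest` is accepted separately even though it compares as -1. A minimal re-creation, assuming that -1/0/1 contract for `semver`:

```shell
# Hypothetical version gate matching the traced logic: pass when the manager
# version is at least the minimum, or is the special tag "latest".
version_gate() {
    local version=$1 minimum=$2
    if [[ "$(semver "$version" "$minimum")" -ge 0 ]]; then
        return 0
    fi
    [[ "$version" == "latest" ]]
}
```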
2026-01-17 00:42:34.637483 | orchestrator | 2026-01-17 00:42:34.637559 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-01-17 00:42:34.637566 | orchestrator | 2026-01-17 00:42:34.637571 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-01-17 00:42:34.637577 | orchestrator | Saturday 17 January 2026 00:42:25 +0000 (0:00:00.133) 0:00:00.133 ****** 2026-01-17 00:42:34.637583 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:42:34.637588 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:42:34.637593 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:42:34.637597 | orchestrator | 2026-01-17 00:42:34.637601 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-01-17 00:42:34.637624 | orchestrator | Saturday 17 January 2026 00:42:26 +0000 (0:00:00.636) 0:00:00.770 ****** 2026-01-17 00:42:34.637629 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:42:34.637633 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:42:34.637639 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:42:34.637643 | orchestrator | 2026-01-17 00:42:34.637647 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-01-17 00:42:34.637651 | orchestrator | Saturday 17 January 2026 00:42:27 +0000 (0:00:00.382) 0:00:01.153 ****** 2026-01-17 00:42:34.637655 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:42:34.637660 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:42:34.637664 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:42:34.637668 | orchestrator | 2026-01-17 00:42:34.637672 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-01-17 00:42:34.637676 | orchestrator | Saturday 17 January 2026 00:42:27 +0000 (0:00:00.712) 0:00:01.865 ****** 2026-01-17 00:42:34.637680 | orchestrator | skipping: 
[testbed-node-3] 2026-01-17 00:42:34.637684 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:42:34.637687 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:42:34.637691 | orchestrator | 2026-01-17 00:42:34.637695 | orchestrator | TASK [Check device availability] *********************************************** 2026-01-17 00:42:34.637699 | orchestrator | Saturday 17 January 2026 00:42:27 +0000 (0:00:00.277) 0:00:02.142 ****** 2026-01-17 00:42:34.637703 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-01-17 00:42:34.637734 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-01-17 00:42:34.637738 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-01-17 00:42:34.637742 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-01-17 00:42:34.637746 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-01-17 00:42:34.637749 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-01-17 00:42:34.637753 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-01-17 00:42:34.637757 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-01-17 00:42:34.637761 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-01-17 00:42:34.637765 | orchestrator | 2026-01-17 00:42:34.637768 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-01-17 00:42:34.637772 | orchestrator | Saturday 17 January 2026 00:42:29 +0000 (0:00:01.253) 0:00:03.396 ****** 2026-01-17 00:42:34.637776 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-01-17 00:42:34.637780 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-01-17 00:42:34.637784 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-01-17 00:42:34.637788 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-01-17 00:42:34.637792 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-01-17 00:42:34.637796 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-01-17 00:42:34.637799 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-01-17 00:42:34.637803 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-01-17 00:42:34.637807 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-01-17 00:42:34.637811 | orchestrator | 2026-01-17 00:42:34.637814 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-01-17 00:42:34.637818 | orchestrator | Saturday 17 January 2026 00:42:30 +0000 (0:00:01.488) 0:00:04.884 ****** 2026-01-17 00:42:34.637822 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-01-17 00:42:34.637826 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-01-17 00:42:34.637830 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-01-17 00:42:34.637834 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-01-17 00:42:34.637838 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-01-17 00:42:34.637845 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-01-17 00:42:34.637849 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-01-17 00:42:34.637858 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-01-17 00:42:34.637862 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-01-17 00:42:34.637865 | orchestrator | 2026-01-17 00:42:34.637869 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-01-17 00:42:34.637873 | orchestrator | Saturday 17 January 2026 00:42:32 +0000 (0:00:02.195) 0:00:07.079 ****** 2026-01-17 00:42:34.637877 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:42:34.637881 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:42:34.637884 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:42:34.637888 | orchestrator | 2026-01-17 00:42:34.637892 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-01-17 00:42:34.637896 | orchestrator | Saturday 17 January 2026 00:42:33 +0000 (0:00:00.639) 0:00:07.719 ****** 2026-01-17 00:42:34.637899 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:42:34.637903 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:42:34.637907 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:42:34.637911 | orchestrator | 2026-01-17 00:42:34.637915 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:42:34.637920 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:42:34.637925 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:42:34.637938 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:42:34.637943 | orchestrator | 2026-01-17 00:42:34.637946 | orchestrator | 2026-01-17 00:42:34.637950 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 00:42:34.637954 | orchestrator | Saturday 17 January 2026 00:42:34 +0000 (0:00:00.653) 0:00:08.373 ****** 2026-01-17 00:42:34.637958 | orchestrator | =============================================================================== 2026-01-17 00:42:34.637961 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.20s 2026-01-17 00:42:34.637965 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.49s 2026-01-17 00:42:34.637969 | orchestrator | Check device availability ----------------------------------------------- 1.25s 2026-01-17 00:42:34.637973 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.71s 2026-01-17 00:42:34.637977 | orchestrator | Request device events from the kernel 
----------------------------------- 0.65s 2026-01-17 00:42:34.637980 | orchestrator | Reload udev rules ------------------------------------------------------- 0.64s 2026-01-17 00:42:34.638059 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.64s 2026-01-17 00:42:34.638065 | orchestrator | Remove all rook related logical devices --------------------------------- 0.38s 2026-01-17 00:42:34.638069 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.28s 2026-01-17 00:42:47.182641 | orchestrator | 2026-01-17 00:42:47 | INFO  | Task ec14771d-eeb0-4cc2-a02e-53bcfd543598 (facts) was prepared for execution. 2026-01-17 00:42:47.182736 | orchestrator | 2026-01-17 00:42:47 | INFO  | It takes a moment until task ec14771d-eeb0-4cc2-a02e-53bcfd543598 (facts) has been started and output is visible here. 2026-01-17 00:43:00.696777 | orchestrator | 2026-01-17 00:43:00.696857 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-17 00:43:00.696866 | orchestrator | 2026-01-17 00:43:00.696871 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-17 00:43:00.696877 | orchestrator | Saturday 17 January 2026 00:42:51 +0000 (0:00:00.292) 0:00:00.292 ****** 2026-01-17 00:43:00.696882 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:43:00.696895 | orchestrator | ok: [testbed-manager] 2026-01-17 00:43:00.696900 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:43:00.696924 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:43:00.696928 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:43:00.696933 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:43:00.696938 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:43:00.696943 | orchestrator | 2026-01-17 00:43:00.696949 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-17 00:43:00.696954 | 
orchestrator | Saturday 17 January 2026 00:42:52 +0000 (0:00:01.152) 0:00:01.445 ****** 2026-01-17 00:43:00.696959 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:43:00.696964 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:43:00.696969 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:43:00.696973 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:43:00.696978 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:00.696983 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:43:00.696987 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:43:00.696992 | orchestrator | 2026-01-17 00:43:00.696996 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-17 00:43:00.697001 | orchestrator | 2026-01-17 00:43:00.697006 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-17 00:43:00.697010 | orchestrator | Saturday 17 January 2026 00:42:54 +0000 (0:00:01.338) 0:00:02.783 ****** 2026-01-17 00:43:00.697015 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:43:00.697020 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:43:00.697048 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:43:00.697054 | orchestrator | ok: [testbed-manager] 2026-01-17 00:43:00.697058 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:43:00.697063 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:43:00.697068 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:43:00.697072 | orchestrator | 2026-01-17 00:43:00.697077 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-17 00:43:00.697081 | orchestrator | 2026-01-17 00:43:00.697086 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-17 00:43:00.697103 | orchestrator | Saturday 17 January 2026 00:42:59 +0000 (0:00:05.645) 0:00:08.429 ****** 2026-01-17 00:43:00.697108 | orchestrator | 
skipping: [testbed-manager] 2026-01-17 00:43:00.697112 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:43:00.697117 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:43:00.697121 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:43:00.697126 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:00.697130 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:43:00.697135 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:43:00.697140 | orchestrator | 2026-01-17 00:43:00.697145 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:43:00.697149 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:43:00.697155 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:43:00.697160 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:43:00.697165 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:43:00.697170 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:43:00.697175 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:43:00.697179 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:43:00.697184 | orchestrator | 2026-01-17 00:43:00.697193 | orchestrator | 2026-01-17 00:43:00.697197 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 00:43:00.697202 | orchestrator | Saturday 17 January 2026 00:43:00 +0000 (0:00:00.552) 0:00:08.982 ****** 2026-01-17 00:43:00.697207 | orchestrator | =============================================================================== 
2026-01-17 00:43:00.697212 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.65s 2026-01-17 00:43:00.697216 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.34s 2026-01-17 00:43:00.697221 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.15s 2026-01-17 00:43:00.697226 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-01-17 00:43:03.116915 | orchestrator | 2026-01-17 00:43:03 | INFO  | Task bad90b88-1cc8-40e2-b5f6-03388c25e000 (ceph-configure-lvm-volumes) was prepared for execution. 2026-01-17 00:43:03.116996 | orchestrator | 2026-01-17 00:43:03 | INFO  | It takes a moment until task bad90b88-1cc8-40e2-b5f6-03388c25e000 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-01-17 00:43:15.309846 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-17 00:43:15.309952 | orchestrator | 2.16.14 2026-01-17 00:43:15.309970 | orchestrator | 2026-01-17 00:43:15.309984 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-01-17 00:43:15.309996 | orchestrator | 2026-01-17 00:43:15.310009 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-17 00:43:15.310109 | orchestrator | Saturday 17 January 2026 00:43:07 +0000 (0:00:00.338) 0:00:00.338 ****** 2026-01-17 00:43:15.310122 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-17 00:43:15.310133 | orchestrator | 2026-01-17 00:43:15.310144 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-17 00:43:15.310155 | orchestrator | Saturday 17 January 2026 00:43:08 +0000 (0:00:00.273) 0:00:00.612 ****** 2026-01-17 00:43:15.310167 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:43:15.310178 | orchestrator | 
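Before the Ceph LVM configuration below, the osism.commons.facts role first ensures a directory for Ansible local facts exists on every host. A side-effect-free sketch of that step, under the assumption that it targets the conventional /etc/ansible/facts.d path (a temporary prefix stands in for / here):

```shell
#!/bin/sh
# Recreate the "Create custom facts directory" step under a temp prefix.
prefix="$(mktemp -d)"
facts_dir="${prefix}/etc/ansible/facts.d"   # conventional local-facts location
mkdir -p "$facts_dir"
chmod 0755 "$facts_dir"
# "Copy fact files" was skipped in this run: no custom fact scripts to install.
ls -ld "$facts_dir"
```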
2026-01-17 00:43:15.310190 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:43:15.310201 | orchestrator | Saturday 17 January 2026 00:43:08 +0000 (0:00:00.210) 0:00:00.823 ****** 2026-01-17 00:43:15.310213 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-01-17 00:43:15.310224 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-01-17 00:43:15.310235 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-01-17 00:43:15.310246 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-01-17 00:43:15.310257 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-01-17 00:43:15.310268 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-01-17 00:43:15.310279 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-01-17 00:43:15.310290 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-01-17 00:43:15.310301 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-01-17 00:43:15.310313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-01-17 00:43:15.310333 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-01-17 00:43:15.310344 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-01-17 00:43:15.310356 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-01-17 00:43:15.310367 | orchestrator | 2026-01-17 00:43:15.310379 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2026-01-17 00:43:15.310417 | orchestrator | Saturday 17 January 2026 00:43:08 +0000 (0:00:00.505) 0:00:01.328 ****** 2026-01-17 00:43:15.310430 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:15.310444 | orchestrator | 2026-01-17 00:43:15.310456 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:43:15.310469 | orchestrator | Saturday 17 January 2026 00:43:08 +0000 (0:00:00.207) 0:00:01.536 ****** 2026-01-17 00:43:15.310482 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:15.310495 | orchestrator | 2026-01-17 00:43:15.310507 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:43:15.310519 | orchestrator | Saturday 17 January 2026 00:43:09 +0000 (0:00:00.184) 0:00:01.720 ****** 2026-01-17 00:43:15.310532 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:15.310544 | orchestrator | 2026-01-17 00:43:15.310556 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:43:15.310573 | orchestrator | Saturday 17 January 2026 00:43:09 +0000 (0:00:00.201) 0:00:01.922 ****** 2026-01-17 00:43:15.310585 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:15.310598 | orchestrator | 2026-01-17 00:43:15.310610 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:43:15.310623 | orchestrator | Saturday 17 January 2026 00:43:09 +0000 (0:00:00.196) 0:00:02.119 ****** 2026-01-17 00:43:15.310635 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:15.310647 | orchestrator | 2026-01-17 00:43:15.310659 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:43:15.310672 | orchestrator | Saturday 17 January 2026 00:43:09 +0000 (0:00:00.207) 0:00:02.326 ****** 2026-01-17 00:43:15.310684 | orchestrator | skipping: 
[testbed-node-3] 2026-01-17 00:43:15.310697 | orchestrator | 2026-01-17 00:43:15.310710 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:43:15.310723 | orchestrator | Saturday 17 January 2026 00:43:09 +0000 (0:00:00.202) 0:00:02.529 ****** 2026-01-17 00:43:15.310735 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:15.310746 | orchestrator | 2026-01-17 00:43:15.310757 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:43:15.310768 | orchestrator | Saturday 17 January 2026 00:43:10 +0000 (0:00:00.268) 0:00:02.798 ****** 2026-01-17 00:43:15.310778 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:15.310789 | orchestrator | 2026-01-17 00:43:15.310800 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:43:15.310811 | orchestrator | Saturday 17 January 2026 00:43:10 +0000 (0:00:00.205) 0:00:03.003 ****** 2026-01-17 00:43:15.310821 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b) 2026-01-17 00:43:15.310834 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b) 2026-01-17 00:43:15.310844 | orchestrator | 2026-01-17 00:43:15.310855 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:43:15.310885 | orchestrator | Saturday 17 January 2026 00:43:10 +0000 (0:00:00.417) 0:00:03.421 ****** 2026-01-17 00:43:15.310897 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_03c99a05-96d9-4471-aa9e-2837c3fbd541) 2026-01-17 00:43:15.310908 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_03c99a05-96d9-4471-aa9e-2837c3fbd541) 2026-01-17 00:43:15.310919 | orchestrator | 2026-01-17 00:43:15.310929 | orchestrator | TASK [Add known links to the list of available block 
devices] ****************** 2026-01-17 00:43:15.310940 | orchestrator | Saturday 17 January 2026 00:43:11 +0000 (0:00:00.672) 0:00:04.094 ****** 2026-01-17 00:43:15.310951 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_386eb8af-61b6-405b-8873-9456a29b0ccf) 2026-01-17 00:43:15.310962 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_386eb8af-61b6-405b-8873-9456a29b0ccf) 2026-01-17 00:43:15.310972 | orchestrator | 2026-01-17 00:43:15.310983 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:43:15.311001 | orchestrator | Saturday 17 January 2026 00:43:12 +0000 (0:00:00.774) 0:00:04.868 ****** 2026-01-17 00:43:15.311012 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_66cad329-aa8c-4366-8769-2bca3a7bcb41) 2026-01-17 00:43:15.311023 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_66cad329-aa8c-4366-8769-2bca3a7bcb41) 2026-01-17 00:43:15.311033 | orchestrator | 2026-01-17 00:43:15.311113 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:43:15.311128 | orchestrator | Saturday 17 January 2026 00:43:13 +0000 (0:00:00.902) 0:00:05.771 ****** 2026-01-17 00:43:15.311139 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-17 00:43:15.311149 | orchestrator | 2026-01-17 00:43:15.311167 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:43:15.311178 | orchestrator | Saturday 17 January 2026 00:43:13 +0000 (0:00:00.348) 0:00:06.120 ****** 2026-01-17 00:43:15.311188 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-01-17 00:43:15.311199 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-01-17 00:43:15.311210 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-01-17 00:43:15.311220 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-01-17 00:43:15.311231 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-01-17 00:43:15.311241 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-01-17 00:43:15.311252 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-01-17 00:43:15.311262 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-01-17 00:43:15.311273 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-01-17 00:43:15.311283 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-01-17 00:43:15.311294 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-01-17 00:43:15.311305 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-01-17 00:43:15.311315 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-01-17 00:43:15.311326 | orchestrator | 2026-01-17 00:43:15.311337 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:43:15.311348 | orchestrator | Saturday 17 January 2026 00:43:13 +0000 (0:00:00.422) 0:00:06.543 ****** 2026-01-17 00:43:15.311358 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:15.311369 | orchestrator | 2026-01-17 00:43:15.311379 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:43:15.311390 | orchestrator | Saturday 17 January 2026 00:43:14 +0000 
(0:00:00.222) 0:00:06.765 ****** 2026-01-17 00:43:15.311400 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:15.311411 | orchestrator | 2026-01-17 00:43:15.311421 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:43:15.311432 | orchestrator | Saturday 17 January 2026 00:43:14 +0000 (0:00:00.195) 0:00:06.960 ****** 2026-01-17 00:43:15.311443 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:15.311453 | orchestrator | 2026-01-17 00:43:15.311464 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:43:15.311475 | orchestrator | Saturday 17 January 2026 00:43:14 +0000 (0:00:00.209) 0:00:07.170 ****** 2026-01-17 00:43:15.311485 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:15.311496 | orchestrator | 2026-01-17 00:43:15.311507 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:43:15.311517 | orchestrator | Saturday 17 January 2026 00:43:14 +0000 (0:00:00.210) 0:00:07.381 ****** 2026-01-17 00:43:15.311535 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:15.311546 | orchestrator | 2026-01-17 00:43:15.311557 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:43:15.311568 | orchestrator | Saturday 17 January 2026 00:43:14 +0000 (0:00:00.163) 0:00:07.545 ****** 2026-01-17 00:43:15.311578 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:15.311589 | orchestrator | 2026-01-17 00:43:15.311599 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:43:15.311610 | orchestrator | Saturday 17 January 2026 00:43:15 +0000 (0:00:00.178) 0:00:07.724 ****** 2026-01-17 00:43:15.311621 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:15.311631 | orchestrator | 2026-01-17 00:43:15.311648 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-01-17 00:43:22.159266 | orchestrator | Saturday 17 January 2026 00:43:15 +0000 (0:00:00.177) 0:00:07.901 ****** 2026-01-17 00:43:22.159386 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:22.159404 | orchestrator | 2026-01-17 00:43:22.159417 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:43:22.159429 | orchestrator | Saturday 17 January 2026 00:43:15 +0000 (0:00:00.172) 0:00:08.074 ****** 2026-01-17 00:43:22.159440 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-01-17 00:43:22.159452 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-01-17 00:43:22.159463 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-01-17 00:43:22.159474 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-01-17 00:43:22.159485 | orchestrator | 2026-01-17 00:43:22.159496 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:43:22.159506 | orchestrator | Saturday 17 January 2026 00:43:16 +0000 (0:00:00.932) 0:00:09.006 ****** 2026-01-17 00:43:22.159517 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:22.159528 | orchestrator | 2026-01-17 00:43:22.159539 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:43:22.159549 | orchestrator | Saturday 17 January 2026 00:43:16 +0000 (0:00:00.203) 0:00:09.210 ****** 2026-01-17 00:43:22.159560 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:22.159571 | orchestrator | 2026-01-17 00:43:22.159581 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:43:22.159592 | orchestrator | Saturday 17 January 2026 00:43:16 +0000 (0:00:00.183) 0:00:09.393 ****** 2026-01-17 00:43:22.159603 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:22.159614 | orchestrator | 2026-01-17 
00:43:22.159624 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:43:22.159635 | orchestrator | Saturday 17 January 2026 00:43:17 +0000 (0:00:00.226) 0:00:09.619 ****** 2026-01-17 00:43:22.159646 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:22.159656 | orchestrator | 2026-01-17 00:43:22.159667 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-17 00:43:22.159677 | orchestrator | Saturday 17 January 2026 00:43:17 +0000 (0:00:00.260) 0:00:09.880 ****** 2026-01-17 00:43:22.159688 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-01-17 00:43:22.159699 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-01-17 00:43:22.159710 | orchestrator | 2026-01-17 00:43:22.159740 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-17 00:43:22.159751 | orchestrator | Saturday 17 January 2026 00:43:17 +0000 (0:00:00.148) 0:00:10.029 ****** 2026-01-17 00:43:22.159762 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:22.159773 | orchestrator | 2026-01-17 00:43:22.159784 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-17 00:43:22.159797 | orchestrator | Saturday 17 January 2026 00:43:17 +0000 (0:00:00.111) 0:00:10.140 ****** 2026-01-17 00:43:22.159810 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:22.159822 | orchestrator | 2026-01-17 00:43:22.159835 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-17 00:43:22.159871 | orchestrator | Saturday 17 January 2026 00:43:17 +0000 (0:00:00.119) 0:00:10.260 ****** 2026-01-17 00:43:22.159884 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:22.159897 | orchestrator | 2026-01-17 00:43:22.159908 | orchestrator | TASK [Define lvm_volumes structures] 
******************************************* 2026-01-17 00:43:22.159919 | orchestrator | Saturday 17 January 2026 00:43:17 +0000 (0:00:00.107) 0:00:10.367 ****** 2026-01-17 00:43:22.159930 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:43:22.159941 | orchestrator | 2026-01-17 00:43:22.159951 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-17 00:43:22.159962 | orchestrator | Saturday 17 January 2026 00:43:17 +0000 (0:00:00.127) 0:00:10.495 ****** 2026-01-17 00:43:22.159973 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c5f49b22-d40f-5ab7-98f7-9762e23da2c0'}}) 2026-01-17 00:43:22.159984 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2051e43b-6678-567a-85ad-b7e1187d21ae'}}) 2026-01-17 00:43:22.159995 | orchestrator | 2026-01-17 00:43:22.160005 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-17 00:43:22.160016 | orchestrator | Saturday 17 January 2026 00:43:18 +0000 (0:00:00.138) 0:00:10.634 ****** 2026-01-17 00:43:22.160028 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c5f49b22-d40f-5ab7-98f7-9762e23da2c0'}})  2026-01-17 00:43:22.160046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2051e43b-6678-567a-85ad-b7e1187d21ae'}})  2026-01-17 00:43:22.160080 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:22.160092 | orchestrator | 2026-01-17 00:43:22.160103 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-17 00:43:22.160113 | orchestrator | Saturday 17 January 2026 00:43:18 +0000 (0:00:00.123) 0:00:10.758 ****** 2026-01-17 00:43:22.160124 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c5f49b22-d40f-5ab7-98f7-9762e23da2c0'}})  2026-01-17 00:43:22.160135 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2051e43b-6678-567a-85ad-b7e1187d21ae'}})  2026-01-17 00:43:22.160146 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:22.160157 | orchestrator | 2026-01-17 00:43:22.160167 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-17 00:43:22.160178 | orchestrator | Saturday 17 January 2026 00:43:18 +0000 (0:00:00.264) 0:00:11.022 ****** 2026-01-17 00:43:22.160189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c5f49b22-d40f-5ab7-98f7-9762e23da2c0'}})  2026-01-17 00:43:22.160218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2051e43b-6678-567a-85ad-b7e1187d21ae'}})  2026-01-17 00:43:22.160229 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:22.160240 | orchestrator | 2026-01-17 00:43:22.160251 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-17 00:43:22.160268 | orchestrator | Saturday 17 January 2026 00:43:18 +0000 (0:00:00.142) 0:00:11.165 ****** 2026-01-17 00:43:22.160279 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:43:22.160289 | orchestrator | 2026-01-17 00:43:22.160300 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-17 00:43:22.160310 | orchestrator | Saturday 17 January 2026 00:43:18 +0000 (0:00:00.118) 0:00:11.284 ****** 2026-01-17 00:43:22.160321 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:43:22.160331 | orchestrator | 2026-01-17 00:43:22.160342 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-17 00:43:22.160353 | orchestrator | Saturday 17 January 2026 00:43:18 +0000 (0:00:00.141) 0:00:11.425 ****** 2026-01-17 00:43:22.160363 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:43:22.160374 | orchestrator | 
2026-01-17 00:43:22.160384 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-17 00:43:22.160395 | orchestrator | Saturday 17 January 2026  00:43:19 +0000 (0:00:00.276)       0:00:11.702 ******
2026-01-17 00:43:22.160414 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:43:22.160425 | orchestrator |
2026-01-17 00:43:22.160436 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-17 00:43:22.160446 | orchestrator | Saturday 17 January 2026  00:43:19 +0000 (0:00:00.172)       0:00:11.875 ******
2026-01-17 00:43:22.160457 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:43:22.160468 | orchestrator |
2026-01-17 00:43:22.160478 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-17 00:43:22.160489 | orchestrator | Saturday 17 January 2026  00:43:19 +0000 (0:00:00.115)       0:00:11.991 ******
2026-01-17 00:43:22.160499 | orchestrator | ok: [testbed-node-3] => {
2026-01-17 00:43:22.160510 | orchestrator |     "ceph_osd_devices": {
2026-01-17 00:43:22.160521 | orchestrator |         "sdb": {
2026-01-17 00:43:22.160533 | orchestrator |             "osd_lvm_uuid": "c5f49b22-d40f-5ab7-98f7-9762e23da2c0"
2026-01-17 00:43:22.160544 | orchestrator |         },
2026-01-17 00:43:22.160555 | orchestrator |         "sdc": {
2026-01-17 00:43:22.160566 | orchestrator |             "osd_lvm_uuid": "2051e43b-6678-567a-85ad-b7e1187d21ae"
2026-01-17 00:43:22.160577 | orchestrator |         }
2026-01-17 00:43:22.160587 | orchestrator |     }
2026-01-17 00:43:22.160598 | orchestrator | }
2026-01-17 00:43:22.160609 | orchestrator |
2026-01-17 00:43:22.160620 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-17 00:43:22.160630 | orchestrator | Saturday 17 January 2026  00:43:19 +0000 (0:00:00.130)       0:00:12.121 ******
2026-01-17 00:43:22.160641 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:43:22.160651 | orchestrator |
2026-01-17 00:43:22.160662 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-17 00:43:22.160673 | orchestrator | Saturday 17 January 2026  00:43:19 +0000 (0:00:00.116)       0:00:12.238 ******
2026-01-17 00:43:22.160683 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:43:22.160694 | orchestrator |
2026-01-17 00:43:22.160704 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-17 00:43:22.160715 | orchestrator | Saturday 17 January 2026  00:43:19 +0000 (0:00:00.119)       0:00:12.357 ******
2026-01-17 00:43:22.160726 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:43:22.160736 | orchestrator |
2026-01-17 00:43:22.160747 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-17 00:43:22.160758 | orchestrator | Saturday 17 January 2026  00:43:19 +0000 (0:00:00.115)       0:00:12.473 ******
2026-01-17 00:43:22.160768 | orchestrator | changed: [testbed-node-3] => {
2026-01-17 00:43:22.160779 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-17 00:43:22.160789 | orchestrator |         "ceph_osd_devices": {
2026-01-17 00:43:22.160800 | orchestrator |             "sdb": {
2026-01-17 00:43:22.160811 | orchestrator |                 "osd_lvm_uuid": "c5f49b22-d40f-5ab7-98f7-9762e23da2c0"
2026-01-17 00:43:22.160821 | orchestrator |             },
2026-01-17 00:43:22.160832 | orchestrator |             "sdc": {
2026-01-17 00:43:22.160843 | orchestrator |                 "osd_lvm_uuid": "2051e43b-6678-567a-85ad-b7e1187d21ae"
2026-01-17 00:43:22.160853 | orchestrator |             }
2026-01-17 00:43:22.160864 | orchestrator |         },
2026-01-17 00:43:22.160874 | orchestrator |         "lvm_volumes": [
2026-01-17 00:43:22.160885 | orchestrator |             {
2026-01-17 00:43:22.160896 | orchestrator |                 "data": "osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0",
2026-01-17 00:43:22.160907 | orchestrator |                 "data_vg": "ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0"
2026-01-17 00:43:22.160917 | orchestrator |             },
2026-01-17 00:43:22.160928 | orchestrator |             {
2026-01-17 00:43:22.160938 | orchestrator |                 "data": "osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae",
2026-01-17 00:43:22.160949 | orchestrator |                 "data_vg": "ceph-2051e43b-6678-567a-85ad-b7e1187d21ae"
2026-01-17 00:43:22.160965 | orchestrator |             }
2026-01-17 00:43:22.160976 | orchestrator |         ]
2026-01-17 00:43:22.160987 | orchestrator |     }
2026-01-17 00:43:22.161004 | orchestrator | }
2026-01-17 00:43:22.161015 | orchestrator |
2026-01-17 00:43:22.161026 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-17 00:43:22.161036 | orchestrator | Saturday 17 January 2026  00:43:20 +0000 (0:00:00.324)       0:00:12.797 ******
2026-01-17 00:43:22.161047 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-17 00:43:22.161075 | orchestrator |
2026-01-17 00:43:22.161087 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-17 00:43:22.161097 | orchestrator |
2026-01-17 00:43:22.161108 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-17 00:43:22.161119 | orchestrator | Saturday 17 January 2026  00:43:21 +0000 (0:00:01.500)       0:00:14.298 ******
2026-01-17 00:43:22.161129 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-17 00:43:22.161140 | orchestrator |
2026-01-17 00:43:22.161150 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-17 00:43:22.161161 | orchestrator | Saturday 17 January 2026  00:43:21 +0000 (0:00:00.231)       0:00:14.529 ******
2026-01-17 00:43:22.161172 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:43:22.161182 | orchestrator |
2026-01-17 00:43:22.161200 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:43:30.039211 | orchestrator | Saturday 17 January 2026  00:43:22 +0000 (0:00:00.225)       0:00:14.754 ******
2026-01-17 00:43:30.039288 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-01-17 00:43:30.039295 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-01-17 00:43:30.039299 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-01-17 00:43:30.039304 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-01-17 00:43:30.039308 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-01-17 00:43:30.039311 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-01-17 00:43:30.039315 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-01-17 00:43:30.039319 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-01-17 00:43:30.039323 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-01-17 00:43:30.039327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-01-17 00:43:30.039331 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-01-17 00:43:30.039337 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-01-17 00:43:30.039341 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-01-17 00:43:30.039345 | orchestrator |
2026-01-17 00:43:30.039350 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:43:30.039354 | orchestrator | Saturday 17 January 2026  00:43:22 +0000 (0:00:00.335)       0:00:15.090 ******
2026-01-17 00:43:30.039358 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:30.039363 | orchestrator |
2026-01-17 00:43:30.039367 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:43:30.039370 | orchestrator | Saturday 17 January 2026  00:43:22 +0000 (0:00:00.149)       0:00:15.240 ******
2026-01-17 00:43:30.039374 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:30.039378 | orchestrator |
2026-01-17 00:43:30.039382 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:43:30.039385 | orchestrator | Saturday 17 January 2026  00:43:22 +0000 (0:00:00.244)       0:00:15.485 ******
2026-01-17 00:43:30.039390 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:30.039394 | orchestrator |
2026-01-17 00:43:30.039397 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:43:30.039418 | orchestrator | Saturday 17 January 2026  00:43:23 +0000 (0:00:00.173)       0:00:15.658 ******
2026-01-17 00:43:30.039422 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:30.039426 | orchestrator |
2026-01-17 00:43:30.039430 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:43:30.039434 | orchestrator | Saturday 17 January 2026  00:43:23 +0000 (0:00:00.155)       0:00:15.813 ******
2026-01-17 00:43:30.039437 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:30.039441 | orchestrator |
2026-01-17 00:43:30.039445 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:43:30.039448 | orchestrator | Saturday 17 January 2026  00:43:23 +0000 (0:00:00.564)       0:00:16.377 ******
2026-01-17 00:43:30.039452 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:30.039456 | orchestrator |
2026-01-17 00:43:30.039473 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:43:30.039477 | orchestrator | Saturday 17 January 2026  00:43:23 +0000 (0:00:00.190)       0:00:16.568 ******
2026-01-17 00:43:30.039480 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:30.039484 | orchestrator |
2026-01-17 00:43:30.039488 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:43:30.039492 | orchestrator | Saturday 17 January 2026  00:43:24 +0000 (0:00:00.197)       0:00:16.766 ******
2026-01-17 00:43:30.039495 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:30.039499 | orchestrator |
2026-01-17 00:43:30.039503 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:43:30.039507 | orchestrator | Saturday 17 January 2026  00:43:24 +0000 (0:00:00.168)       0:00:16.935 ******
2026-01-17 00:43:30.039510 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b)
2026-01-17 00:43:30.039515 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b)
2026-01-17 00:43:30.039519 | orchestrator |
2026-01-17 00:43:30.039523 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:43:30.039527 | orchestrator | Saturday 17 January 2026  00:43:24 +0000 (0:00:00.410)       0:00:17.345 ******
2026-01-17 00:43:30.039531 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_89953a4d-629d-4187-87cb-8eaa4172afa2)
2026-01-17 00:43:30.039535 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_89953a4d-629d-4187-87cb-8eaa4172afa2)
2026-01-17 00:43:30.039538 | orchestrator |
2026-01-17 00:43:30.039542 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:43:30.039546 | orchestrator | Saturday 17 January 2026  00:43:25 +0000 (0:00:00.332)       0:00:17.678 ******
2026-01-17 00:43:30.039550 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bd9e2794-f462-41d3-bb22-ac4c4b73281f)
2026-01-17 00:43:30.039554 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bd9e2794-f462-41d3-bb22-ac4c4b73281f)
2026-01-17 00:43:30.039558 | orchestrator |
2026-01-17 00:43:30.039561 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:43:30.039575 | orchestrator | Saturday 17 January 2026  00:43:25 +0000 (0:00:00.397)       0:00:18.075 ******
2026-01-17 00:43:30.039579 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1215eb05-d4be-4bfd-8c82-e464703dc233)
2026-01-17 00:43:30.039583 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1215eb05-d4be-4bfd-8c82-e464703dc233)
2026-01-17 00:43:30.039587 | orchestrator |
2026-01-17 00:43:30.039591 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:43:30.039595 | orchestrator | Saturday 17 January 2026  00:43:25 +0000 (0:00:00.434)       0:00:18.510 ******
2026-01-17 00:43:30.039598 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-17 00:43:30.039603 | orchestrator |
2026-01-17 00:43:30.039606 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:43:30.039610 | orchestrator | Saturday 17 January 2026  00:43:26 +0000 (0:00:00.343)       0:00:18.853 ******
2026-01-17 00:43:30.039618 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-01-17 00:43:30.039622 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-01-17 00:43:30.039625 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-01-17 00:43:30.039629 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-01-17 00:43:30.039633 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-01-17 00:43:30.039636 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-01-17 00:43:30.039640 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-01-17 00:43:30.039644 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-01-17 00:43:30.039647 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-01-17 00:43:30.039651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-01-17 00:43:30.039655 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-01-17 00:43:30.039658 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-01-17 00:43:30.039662 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-01-17 00:43:30.039666 | orchestrator |
2026-01-17 00:43:30.039669 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:43:30.039673 | orchestrator | Saturday 17 January 2026  00:43:26 +0000 (0:00:00.430)       0:00:19.284 ******
2026-01-17 00:43:30.039677 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:30.039681 | orchestrator |
2026-01-17 00:43:30.039684 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:43:30.039692 | orchestrator | Saturday 17 January 2026  00:43:27 +0000 (0:00:00.726)       0:00:20.011 ******
2026-01-17 00:43:30.039696 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:30.039699 | orchestrator |
2026-01-17 00:43:30.039703 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:43:30.039707 | orchestrator | Saturday 17 January 2026  00:43:27 +0000 (0:00:00.256)       0:00:20.267 ******
2026-01-17 00:43:30.039711 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:30.039714 | orchestrator |
2026-01-17 00:43:30.039718 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:43:30.039722 | orchestrator | Saturday 17 January 2026  00:43:27 +0000 (0:00:00.202)       0:00:20.469 ******
2026-01-17 00:43:30.039726 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:30.039729 | orchestrator |
2026-01-17 00:43:30.039733 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:43:30.039737 | orchestrator | Saturday 17 January 2026  00:43:28 +0000 (0:00:00.200)       0:00:20.670 ******
2026-01-17 00:43:30.039741 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:30.039744 | orchestrator |
2026-01-17 00:43:30.039748 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:43:30.039752 | orchestrator | Saturday 17 January 2026  00:43:28 +0000 (0:00:00.183)       0:00:20.853 ******
2026-01-17 00:43:30.039756 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:30.039759 | orchestrator |
2026-01-17 00:43:30.039763 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:43:30.039767 | orchestrator | Saturday 17 January 2026  00:43:28 +0000 (0:00:00.196)       0:00:21.050 ******
2026-01-17 00:43:30.039771 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:30.039775 | orchestrator |
2026-01-17 00:43:30.039780 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:43:30.039784 | orchestrator | Saturday 17 January 2026  00:43:28 +0000 (0:00:00.197)       0:00:21.247 ******
2026-01-17 00:43:30.039792 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:30.039796 | orchestrator |
2026-01-17 00:43:30.039801 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:43:30.039805 | orchestrator | Saturday 17 January 2026  00:43:28 +0000 (0:00:00.207)       0:00:21.454 ******
2026-01-17 00:43:30.039809 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-01-17 00:43:30.039815 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-01-17 00:43:30.039820 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-01-17 00:43:30.039824 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-01-17 00:43:30.039828 | orchestrator |
2026-01-17 00:43:30.039833 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:43:30.039837 | orchestrator | Saturday 17 January 2026  00:43:29 +0000 (0:00:00.972)       0:00:22.427 ******
2026-01-17 00:43:30.039841 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:36.259420 | orchestrator |
2026-01-17 00:43:36.259540 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:43:36.259558 | orchestrator | Saturday 17 January 2026  00:43:30 +0000 (0:00:00.206)       0:00:22.633 ******
2026-01-17 00:43:36.259570 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:36.259596 | orchestrator |
2026-01-17 00:43:36.259648 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:43:36.259662 | orchestrator | Saturday 17 January 2026  00:43:30 +0000 (0:00:00.206)       0:00:22.840 ******
2026-01-17 00:43:36.259674 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:36.259685 | orchestrator |
2026-01-17 00:43:36.259696 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:43:36.259707 | orchestrator | Saturday 17 January 2026  00:43:30 +0000 (0:00:00.285)       0:00:23.126 ******
2026-01-17 00:43:36.259718 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:36.259729 | orchestrator |
2026-01-17 00:43:36.259740 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-17 00:43:36.259751 | orchestrator | Saturday 17 January 2026  00:43:31 +0000 (0:00:00.711)       0:00:23.838 ******
2026-01-17 00:43:36.259762 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-01-17 00:43:36.259773 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-01-17 00:43:36.259783 | orchestrator |
2026-01-17 00:43:36.259794 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-17 00:43:36.259805 | orchestrator | Saturday 17 January 2026  00:43:31 +0000 (0:00:00.184)       0:00:24.022 ******
2026-01-17 00:43:36.259815 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:36.259827 | orchestrator |
2026-01-17 00:43:36.259838 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-17 00:43:36.259849 | orchestrator | Saturday 17 January 2026  00:43:31 +0000 (0:00:00.128)       0:00:24.151 ******
2026-01-17 00:43:36.259860 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:36.259870 | orchestrator |
2026-01-17 00:43:36.259881 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-17 00:43:36.259892 | orchestrator | Saturday 17 January 2026  00:43:31 +0000 (0:00:00.151)       0:00:24.302 ******
2026-01-17 00:43:36.259903 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:36.259913 | orchestrator |
2026-01-17 00:43:36.259924 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-17 00:43:36.259935 | orchestrator | Saturday 17 January 2026  00:43:31 +0000 (0:00:00.142)       0:00:24.445 ******
2026-01-17 00:43:36.259946 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:43:36.259958 | orchestrator |
2026-01-17 00:43:36.259968 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-17 00:43:36.259979 | orchestrator | Saturday 17 January 2026  00:43:32 +0000 (0:00:00.154)       0:00:24.599 ******
2026-01-17 00:43:36.259991 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'}})
2026-01-17 00:43:36.260002 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fbc9b557-fafa-5136-b4c6-7d286dd557bb'}})
2026-01-17 00:43:36.260040 | orchestrator |
2026-01-17 00:43:36.260051 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-17 00:43:36.260062 | orchestrator | Saturday 17 January 2026  00:43:32 +0000 (0:00:00.202)       0:00:24.802 ******
2026-01-17 00:43:36.260133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'}})
2026-01-17 00:43:36.260168 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fbc9b557-fafa-5136-b4c6-7d286dd557bb'}})
2026-01-17 00:43:36.260180 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:36.260191 | orchestrator |
2026-01-17 00:43:36.260202 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-17 00:43:36.260213 | orchestrator | Saturday 17 January 2026  00:43:32 +0000 (0:00:00.182)       0:00:24.985 ******
2026-01-17 00:43:36.260224 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'}})
2026-01-17 00:43:36.260234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fbc9b557-fafa-5136-b4c6-7d286dd557bb'}})
2026-01-17 00:43:36.260245 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:36.260256 | orchestrator |
2026-01-17 00:43:36.260267 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-17 00:43:36.260277 | orchestrator | Saturday 17 January 2026  00:43:32 +0000 (0:00:00.181)       0:00:25.166 ******
2026-01-17 00:43:36.260289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'}})
2026-01-17 00:43:36.260300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fbc9b557-fafa-5136-b4c6-7d286dd557bb'}})
2026-01-17 00:43:36.260311 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:36.260322 | orchestrator |
2026-01-17 00:43:36.260332 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-17 00:43:36.260343 | orchestrator | Saturday 17 January 2026  00:43:32 +0000 (0:00:00.142)       0:00:25.308 ******
2026-01-17 00:43:36.260354 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:43:36.260365 | orchestrator |
2026-01-17 00:43:36.260375 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-17 00:43:36.260386 | orchestrator | Saturday 17 January 2026  00:43:32 +0000 (0:00:00.105)       0:00:25.414 ******
2026-01-17 00:43:36.260397 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:43:36.260407 | orchestrator |
2026-01-17 00:43:36.260418 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-17 00:43:36.260429 | orchestrator | Saturday 17 January 2026  00:43:32 +0000 (0:00:00.107)       0:00:25.522 ******
2026-01-17 00:43:36.260459 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:36.260470 | orchestrator |
2026-01-17 00:43:36.260482 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-17 00:43:36.260492 | orchestrator | Saturday 17 January 2026  00:43:33 +0000 (0:00:00.256)       0:00:25.778 ******
2026-01-17 00:43:36.260503 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:36.260514 | orchestrator |
2026-01-17 00:43:36.260525 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-17 00:43:36.260536 | orchestrator | Saturday 17 January 2026  00:43:33 +0000 (0:00:00.113)       0:00:25.891 ******
2026-01-17 00:43:36.260546 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:36.260557 | orchestrator |
2026-01-17 00:43:36.260568 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-17 00:43:36.260578 | orchestrator | Saturday 17 January 2026  00:43:33 +0000 (0:00:00.117)       0:00:26.008 ******
2026-01-17 00:43:36.260589 | orchestrator | ok: [testbed-node-4] => {
2026-01-17 00:43:36.260600 | orchestrator |     "ceph_osd_devices": {
2026-01-17 00:43:36.260611 | orchestrator |         "sdb": {
2026-01-17 00:43:36.260623 | orchestrator |             "osd_lvm_uuid": "6f2a493f-ee42-5e89-bc68-fb4f7dc1b165"
2026-01-17 00:43:36.260644 | orchestrator |         },
2026-01-17 00:43:36.260655 | orchestrator |         "sdc": {
2026-01-17 00:43:36.260666 | orchestrator |             "osd_lvm_uuid": "fbc9b557-fafa-5136-b4c6-7d286dd557bb"
2026-01-17 00:43:36.260676 | orchestrator |         }
2026-01-17 00:43:36.260687 | orchestrator |     }
2026-01-17 00:43:36.260698 | orchestrator | }
2026-01-17 00:43:36.260710 | orchestrator |
2026-01-17 00:43:36.260721 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-17 00:43:36.260731 | orchestrator | Saturday 17 January 2026  00:43:33 +0000 (0:00:00.142)       0:00:26.151 ******
2026-01-17 00:43:36.260742 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:36.260752 | orchestrator |
2026-01-17 00:43:36.260763 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-17 00:43:36.260774 | orchestrator | Saturday 17 January 2026  00:43:33 +0000 (0:00:00.107)       0:00:26.259 ******
2026-01-17 00:43:36.260784 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:36.260795 | orchestrator |
2026-01-17 00:43:36.260806 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-17 00:43:36.260816 | orchestrator | Saturday 17 January 2026  00:43:33 +0000 (0:00:00.113)       0:00:26.372 ******
2026-01-17 00:43:36.260827 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:43:36.260837 | orchestrator |
2026-01-17 00:43:36.260848 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-17 00:43:36.260859 | orchestrator | Saturday 17 January 2026  00:43:33 +0000 (0:00:00.108)       0:00:26.481 ******
2026-01-17 00:43:36.260869 | orchestrator | changed: [testbed-node-4] => {
2026-01-17 00:43:36.260881 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-17 00:43:36.260900 | orchestrator |         "ceph_osd_devices": {
2026-01-17 00:43:36.260925 | orchestrator |             "sdb": {
2026-01-17 00:43:36.260949 | orchestrator |                 "osd_lvm_uuid": "6f2a493f-ee42-5e89-bc68-fb4f7dc1b165"
2026-01-17 00:43:36.260967 | orchestrator |             },
2026-01-17 00:43:36.260985 | orchestrator |             "sdc": {
2026-01-17 00:43:36.261003 | orchestrator |                 "osd_lvm_uuid": "fbc9b557-fafa-5136-b4c6-7d286dd557bb"
2026-01-17 00:43:36.261022 | orchestrator |             }
2026-01-17 00:43:36.261041 | orchestrator |         },
2026-01-17 00:43:36.261059 | orchestrator |         "lvm_volumes": [
2026-01-17 00:43:36.261106 | orchestrator |             {
2026-01-17 00:43:36.261127 | orchestrator |                 "data": "osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165",
2026-01-17 00:43:36.261144 | orchestrator |                 "data_vg": "ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165"
2026-01-17 00:43:36.261160 | orchestrator |             },
2026-01-17 00:43:36.261171 | orchestrator |             {
2026-01-17 00:43:36.261182 | orchestrator |                 "data": "osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb",
2026-01-17 00:43:36.261193 | orchestrator |                 "data_vg": "ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb"
2026-01-17 00:43:36.261204 | orchestrator |             }
2026-01-17 00:43:36.261214 | orchestrator |         ]
2026-01-17 00:43:36.261225 | orchestrator |     }
2026-01-17 00:43:36.261236 | orchestrator | }
2026-01-17 00:43:36.261247 | orchestrator |
2026-01-17 00:43:36.261258 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-17 00:43:36.261269 | orchestrator | Saturday 17 January 2026  00:43:34 +0000 (0:00:00.181)       0:00:26.663 ******
2026-01-17 00:43:36.261280 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-17 00:43:36.261290 | orchestrator |
2026-01-17 00:43:36.261301 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-17 00:43:36.261312 | orchestrator |
2026-01-17 00:43:36.261323 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-17 00:43:36.261334 | orchestrator | Saturday 17 January 2026  00:43:35 +0000 (0:00:00.984)       0:00:27.648 ******
2026-01-17 00:43:36.261344 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-17 00:43:36.261355 | orchestrator |
2026-01-17 00:43:36.261367 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-17 00:43:36.261397 | orchestrator | Saturday 17 January 2026  00:43:35 +0000 (0:00:00.606)       0:00:28.255 ******
2026-01-17 00:43:36.261409 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:43:36.261420 | orchestrator |
2026-01-17 00:43:36.261431 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:43:36.261442 | orchestrator | Saturday 17 January 2026  00:43:35 +0000 (0:00:00.245)       0:00:28.500 ******
2026-01-17 00:43:36.261452 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-17 00:43:36.261463 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-17 00:43:36.261474 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-17 00:43:36.261485 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-17 00:43:36.261496 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-17 00:43:36.261517 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-17 00:43:44.613284 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-17 00:43:44.613360 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-17 00:43:44.613368 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-17 00:43:44.613373 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-17 00:43:44.613379 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-17 00:43:44.613384 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-17 00:43:44.613389 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-17 00:43:44.613394 | orchestrator |
2026-01-17 00:43:44.613400 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:43:44.613406 | orchestrator | Saturday 17 January 2026  00:43:36 +0000 (0:00:00.344)       0:00:28.844 ******
2026-01-17 00:43:44.613411 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:43:44.613416 | orchestrator |
2026-01-17 00:43:44.613421 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:43:44.613426 | orchestrator | Saturday 17 January 2026  00:43:36 +0000 (0:00:00.199)       0:00:29.044 ******
2026-01-17 00:43:44.613431 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:43:44.613436 | orchestrator |
2026-01-17 00:43:44.613441 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:43:44.613446 | orchestrator | Saturday 17 January 2026  00:43:36 +0000 (0:00:00.189)       0:00:29.233 ******
2026-01-17 00:43:44.613451 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:43:44.613456 | orchestrator |
2026-01-17 00:43:44.613461 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:43:44.613465 | orchestrator | Saturday 17 January 2026  00:43:36 +0000 (0:00:00.197)       0:00:29.431 ******
2026-01-17 00:43:44.613470 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:43:44.613475 | orchestrator |
2026-01-17 00:43:44.613480 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:43:44.613485 | orchestrator | Saturday 17 January 2026  00:43:37 +0000 (0:00:00.202)       0:00:29.634 ******
2026-01-17 00:43:44.613490 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:43:44.613495 | orchestrator |
2026-01-17 00:43:44.613499 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:43:44.613504 | orchestrator | Saturday 17 January 2026  00:43:37 +0000 (0:00:00.205)       0:00:29.839 ******
2026-01-17 00:43:44.613509 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:43:44.613514 | orchestrator |
2026-01-17 00:43:44.613519 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:43:44.613541 | orchestrator | Saturday 17 January 2026  00:43:37 +0000 (0:00:00.219)       0:00:30.059 ******
2026-01-17 00:43:44.613547 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:43:44.613551 | orchestrator |
2026-01-17 00:43:44.613557 | orchestrator | TASK [Add known links
to the list of available block devices] ****************** 2026-01-17 00:43:44.613561 | orchestrator | Saturday 17 January 2026 00:43:37 +0000 (0:00:00.202) 0:00:30.261 ****** 2026-01-17 00:43:44.613566 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:43:44.613571 | orchestrator | 2026-01-17 00:43:44.613576 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:43:44.613581 | orchestrator | Saturday 17 January 2026 00:43:37 +0000 (0:00:00.202) 0:00:30.464 ****** 2026-01-17 00:43:44.613586 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82) 2026-01-17 00:43:44.613592 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82) 2026-01-17 00:43:44.613597 | orchestrator | 2026-01-17 00:43:44.613602 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:43:44.613607 | orchestrator | Saturday 17 January 2026 00:43:38 +0000 (0:00:00.903) 0:00:31.368 ****** 2026-01-17 00:43:44.613611 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_653651ff-f0c3-4f93-a415-b7bde2938506) 2026-01-17 00:43:44.613616 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_653651ff-f0c3-4f93-a415-b7bde2938506) 2026-01-17 00:43:44.613621 | orchestrator | 2026-01-17 00:43:44.613626 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:43:44.613631 | orchestrator | Saturday 17 January 2026 00:43:39 +0000 (0:00:00.429) 0:00:31.798 ****** 2026-01-17 00:43:44.613636 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3748448b-4cb4-41ff-a93c-c2a900d49ce0) 2026-01-17 00:43:44.613641 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3748448b-4cb4-41ff-a93c-c2a900d49ce0) 2026-01-17 00:43:44.613645 | orchestrator | 2026-01-17 00:43:44.613650 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:43:44.613655 | orchestrator | Saturday 17 January 2026 00:43:39 +0000 (0:00:00.463) 0:00:32.262 ****** 2026-01-17 00:43:44.613660 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b2725b1a-ab02-479a-b1d7-829717bc50e1) 2026-01-17 00:43:44.613665 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b2725b1a-ab02-479a-b1d7-829717bc50e1) 2026-01-17 00:43:44.613670 | orchestrator | 2026-01-17 00:43:44.613675 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:43:44.613679 | orchestrator | Saturday 17 January 2026 00:43:40 +0000 (0:00:00.469) 0:00:32.732 ****** 2026-01-17 00:43:44.613684 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-17 00:43:44.613689 | orchestrator | 2026-01-17 00:43:44.613694 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:43:44.613710 | orchestrator | Saturday 17 January 2026 00:43:40 +0000 (0:00:00.363) 0:00:33.095 ****** 2026-01-17 00:43:44.613715 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-01-17 00:43:44.613720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-01-17 00:43:44.613725 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-01-17 00:43:44.613729 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-01-17 00:43:44.613734 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-01-17 00:43:44.613751 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-01-17 00:43:44.613756 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-01-17 00:43:44.613761 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-01-17 00:43:44.613771 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-01-17 00:43:44.613776 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-01-17 00:43:44.613781 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-01-17 00:43:44.613785 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-01-17 00:43:44.613790 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-01-17 00:43:44.613795 | orchestrator | 2026-01-17 00:43:44.613800 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:43:44.613805 | orchestrator | Saturday 17 January 2026 00:43:40 +0000 (0:00:00.370) 0:00:33.466 ****** 2026-01-17 00:43:44.613809 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:43:44.613814 | orchestrator | 2026-01-17 00:43:44.613819 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:43:44.613824 | orchestrator | Saturday 17 January 2026 00:43:41 +0000 (0:00:00.209) 0:00:33.676 ****** 2026-01-17 00:43:44.613828 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:43:44.613833 | orchestrator | 2026-01-17 00:43:44.613838 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:43:44.613846 | orchestrator | Saturday 17 January 2026 00:43:41 +0000 (0:00:00.216) 0:00:33.892 ****** 2026-01-17 00:43:44.613851 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:43:44.613856 | orchestrator | 2026-01-17 00:43:44.613861 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:43:44.613865 | orchestrator | Saturday 17 January 2026 00:43:41 +0000 (0:00:00.204) 0:00:34.097 ****** 2026-01-17 00:43:44.613870 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:43:44.613875 | orchestrator | 2026-01-17 00:43:44.613880 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:43:44.613885 | orchestrator | Saturday 17 January 2026 00:43:41 +0000 (0:00:00.198) 0:00:34.295 ****** 2026-01-17 00:43:44.613889 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:43:44.613894 | orchestrator | 2026-01-17 00:43:44.613899 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:43:44.613904 | orchestrator | Saturday 17 January 2026 00:43:41 +0000 (0:00:00.196) 0:00:34.492 ****** 2026-01-17 00:43:44.613909 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:43:44.613913 | orchestrator | 2026-01-17 00:43:44.613918 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:43:44.613923 | orchestrator | Saturday 17 January 2026 00:43:42 +0000 (0:00:00.684) 0:00:35.176 ****** 2026-01-17 00:43:44.613928 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:43:44.613932 | orchestrator | 2026-01-17 00:43:44.613937 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:43:44.613942 | orchestrator | Saturday 17 January 2026 00:43:42 +0000 (0:00:00.223) 0:00:35.400 ****** 2026-01-17 00:43:44.613947 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:43:44.613952 | orchestrator | 2026-01-17 00:43:44.613956 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:43:44.613961 | orchestrator | Saturday 17 January 2026 00:43:43 +0000 (0:00:00.224) 0:00:35.625 ****** 
2026-01-17 00:43:44.613966 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-01-17 00:43:44.613971 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-01-17 00:43:44.613976 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-01-17 00:43:44.613981 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-01-17 00:43:44.613986 | orchestrator |
2026-01-17 00:43:44.613990 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:43:44.613995 | orchestrator | Saturday 17 January 2026 00:43:43 +0000 (0:00:00.678) 0:00:36.303 ******
2026-01-17 00:43:44.614000 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:43:44.614009 | orchestrator |
2026-01-17 00:43:44.614054 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:43:44.614060 | orchestrator | Saturday 17 January 2026 00:43:43 +0000 (0:00:00.211) 0:00:36.515 ******
2026-01-17 00:43:44.614065 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:43:44.614070 | orchestrator |
2026-01-17 00:43:44.614074 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:43:44.614079 | orchestrator | Saturday 17 January 2026 00:43:44 +0000 (0:00:00.222) 0:00:36.737 ******
2026-01-17 00:43:44.614102 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:43:44.614107 | orchestrator |
2026-01-17 00:43:44.614112 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:43:44.614117 | orchestrator | Saturday 17 January 2026 00:43:44 +0000 (0:00:00.209) 0:00:36.946 ******
2026-01-17 00:43:44.614122 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:43:44.614127 | orchestrator |
2026-01-17 00:43:44.614135 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-17 00:43:49.255723 | orchestrator | Saturday 17 January 2026 00:43:44 +0000 (0:00:00.257) 0:00:37.204 ******
2026-01-17 00:43:49.255816 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-01-17 00:43:49.255824 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-01-17 00:43:49.255829 | orchestrator |
2026-01-17 00:43:49.255833 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-17 00:43:49.255838 | orchestrator | Saturday 17 January 2026 00:43:44 +0000 (0:00:00.172) 0:00:37.376 ******
2026-01-17 00:43:49.255842 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:43:49.255846 | orchestrator |
2026-01-17 00:43:49.255850 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-17 00:43:49.255854 | orchestrator | Saturday 17 January 2026 00:43:44 +0000 (0:00:00.138) 0:00:37.515 ******
2026-01-17 00:43:49.255858 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:43:49.255862 | orchestrator |
2026-01-17 00:43:49.255865 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-17 00:43:49.255869 | orchestrator | Saturday 17 January 2026 00:43:45 +0000 (0:00:00.143) 0:00:37.659 ******
2026-01-17 00:43:49.255873 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:43:49.255876 | orchestrator |
2026-01-17 00:43:49.255880 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-17 00:43:49.255884 | orchestrator | Saturday 17 January 2026 00:43:45 +0000 (0:00:00.366) 0:00:38.025 ******
2026-01-17 00:43:49.255888 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:43:49.255892 | orchestrator |
2026-01-17 00:43:49.255897 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-17 00:43:49.255901 | orchestrator | Saturday 17 January 2026 00:43:45 +0000 (0:00:00.127) 0:00:38.152 ******
2026-01-17 00:43:49.255905 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a3dfbdd8-de3c-56f7-9997-9a9b5f483001'}})
2026-01-17 00:43:49.255910 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '68934a0c-2b18-58d2-8851-459d4d664360'}})
2026-01-17 00:43:49.255913 | orchestrator |
2026-01-17 00:43:49.255917 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-17 00:43:49.255921 | orchestrator | Saturday 17 January 2026 00:43:45 +0000 (0:00:00.177) 0:00:38.330 ******
2026-01-17 00:43:49.255925 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a3dfbdd8-de3c-56f7-9997-9a9b5f483001'}})
2026-01-17 00:43:49.255931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '68934a0c-2b18-58d2-8851-459d4d664360'}})
2026-01-17 00:43:49.255935 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:43:49.255938 | orchestrator |
2026-01-17 00:43:49.255943 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-17 00:43:49.255947 | orchestrator | Saturday 17 January 2026 00:43:45 +0000 (0:00:00.170) 0:00:38.500 ******
2026-01-17 00:43:49.255967 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a3dfbdd8-de3c-56f7-9997-9a9b5f483001'}})
2026-01-17 00:43:49.255971 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '68934a0c-2b18-58d2-8851-459d4d664360'}})
2026-01-17 00:43:49.255975 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:43:49.255979 | orchestrator |
2026-01-17 00:43:49.255983 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-17 00:43:49.255986 | orchestrator | Saturday 17 January 2026 00:43:46 +0000 (0:00:00.182) 0:00:38.683 ******
2026-01-17 00:43:49.256003 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a3dfbdd8-de3c-56f7-9997-9a9b5f483001'}})
2026-01-17 00:43:49.256007 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '68934a0c-2b18-58d2-8851-459d4d664360'}})
2026-01-17 00:43:49.256011 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:43:49.256015 | orchestrator |
2026-01-17 00:43:49.256018 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-17 00:43:49.256022 | orchestrator | Saturday 17 January 2026 00:43:46 +0000 (0:00:00.148) 0:00:38.831 ******
2026-01-17 00:43:49.256026 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:43:49.256030 | orchestrator |
2026-01-17 00:43:49.256035 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-17 00:43:49.256041 | orchestrator | Saturday 17 January 2026 00:43:46 +0000 (0:00:00.179) 0:00:39.011 ******
2026-01-17 00:43:49.256047 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:43:49.256053 | orchestrator |
2026-01-17 00:43:49.256059 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-17 00:43:49.256065 | orchestrator | Saturday 17 January 2026 00:43:46 +0000 (0:00:00.196) 0:00:39.208 ******
2026-01-17 00:43:49.256070 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:43:49.256076 | orchestrator |
2026-01-17 00:43:49.256082 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-17 00:43:49.256088 | orchestrator | Saturday 17 January 2026 00:43:46 +0000 (0:00:00.200) 0:00:39.408 ******
2026-01-17 00:43:49.256114 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:43:49.256121 | orchestrator |
2026-01-17 00:43:49.256127 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-17 00:43:49.256132 | orchestrator | Saturday 17 January 2026 00:43:46 +0000 (0:00:00.169) 0:00:39.577 ******
2026-01-17 00:43:49.256138 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:43:49.256144 | orchestrator |
2026-01-17 00:43:49.256150 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-17 00:43:49.256157 | orchestrator | Saturday 17 January 2026 00:43:47 +0000 (0:00:00.156) 0:00:39.734 ******
2026-01-17 00:43:49.256163 | orchestrator | ok: [testbed-node-5] => {
2026-01-17 00:43:49.256170 | orchestrator |     "ceph_osd_devices": {
2026-01-17 00:43:49.256187 | orchestrator |         "sdb": {
2026-01-17 00:43:49.256209 | orchestrator |             "osd_lvm_uuid": "a3dfbdd8-de3c-56f7-9997-9a9b5f483001"
2026-01-17 00:43:49.256214 | orchestrator |         },
2026-01-17 00:43:49.256218 | orchestrator |         "sdc": {
2026-01-17 00:43:49.256222 | orchestrator |             "osd_lvm_uuid": "68934a0c-2b18-58d2-8851-459d4d664360"
2026-01-17 00:43:49.256226 | orchestrator |         }
2026-01-17 00:43:49.256230 | orchestrator |     }
2026-01-17 00:43:49.256234 | orchestrator | }
2026-01-17 00:43:49.256238 | orchestrator |
2026-01-17 00:43:49.256242 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-17 00:43:49.256246 | orchestrator | Saturday 17 January 2026 00:43:47 +0000 (0:00:00.158) 0:00:39.892 ******
2026-01-17 00:43:49.256250 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:43:49.256254 | orchestrator |
2026-01-17 00:43:49.256257 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-17 00:43:49.256261 | orchestrator | Saturday 17 January 2026 00:43:47 +0000 (0:00:00.347) 0:00:40.240 ******
2026-01-17 00:43:49.256273 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:43:49.256278 | orchestrator |
2026-01-17 00:43:49.256282 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-17 00:43:49.256286 | orchestrator | Saturday 17 January 2026 00:43:47 +0000 (0:00:00.134) 0:00:40.374 ******
2026-01-17 00:43:49.256290 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:43:49.256295 | orchestrator |
2026-01-17 00:43:49.256299 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-17 00:43:49.256303 | orchestrator | Saturday 17 January 2026 00:43:47 +0000 (0:00:00.145) 0:00:40.520 ******
2026-01-17 00:43:49.256307 | orchestrator | changed: [testbed-node-5] => {
2026-01-17 00:43:49.256312 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-17 00:43:49.256318 | orchestrator |         "ceph_osd_devices": {
2026-01-17 00:43:49.256325 | orchestrator |             "sdb": {
2026-01-17 00:43:49.256332 | orchestrator |                 "osd_lvm_uuid": "a3dfbdd8-de3c-56f7-9997-9a9b5f483001"
2026-01-17 00:43:49.256339 | orchestrator |             },
2026-01-17 00:43:49.256345 | orchestrator |             "sdc": {
2026-01-17 00:43:49.256352 | orchestrator |                 "osd_lvm_uuid": "68934a0c-2b18-58d2-8851-459d4d664360"
2026-01-17 00:43:49.256358 | orchestrator |             }
2026-01-17 00:43:49.256365 | orchestrator |         },
2026-01-17 00:43:49.256370 | orchestrator |         "lvm_volumes": [
2026-01-17 00:43:49.256375 | orchestrator |             {
2026-01-17 00:43:49.256379 | orchestrator |                 "data": "osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001",
2026-01-17 00:43:49.256384 | orchestrator |                 "data_vg": "ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001"
2026-01-17 00:43:49.256388 | orchestrator |             },
2026-01-17 00:43:49.256392 | orchestrator |             {
2026-01-17 00:43:49.256396 | orchestrator |                 "data": "osd-block-68934a0c-2b18-58d2-8851-459d4d664360",
2026-01-17 00:43:49.256400 | orchestrator |                 "data_vg": "ceph-68934a0c-2b18-58d2-8851-459d4d664360"
2026-01-17 00:43:49.256405 | orchestrator |             }
2026-01-17 00:43:49.256413 | orchestrator |         ]
2026-01-17 00:43:49.256417 | orchestrator |     }
2026-01-17 00:43:49.256422 | orchestrator | }
2026-01-17 00:43:49.256426 | orchestrator |
2026-01-17 00:43:49.256430 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-17 00:43:49.256434 | orchestrator | Saturday 17 January 2026 00:43:48 +0000 (0:00:00.233) 0:00:40.753 ******
2026-01-17 00:43:49.256439 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-17 00:43:49.256443 | orchestrator |
2026-01-17 00:43:49.256448 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 00:43:49.256452 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-17 00:43:49.256458 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-17 00:43:49.256471 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-17 00:43:49.256475 | orchestrator |
2026-01-17 00:43:49.256480 | orchestrator |
2026-01-17 00:43:49.256483 | orchestrator |
2026-01-17 00:43:49.256487 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 00:43:49.256497 | orchestrator | Saturday 17 January 2026 00:43:49 +0000 (0:00:01.050) 0:00:41.804 ******
2026-01-17 00:43:49.256501 | orchestrator | ===============================================================================
2026-01-17 00:43:49.256505 | orchestrator | Write configuration file ------------------------------------------------ 3.54s
2026-01-17 00:43:49.256509 | orchestrator | Add known partitions to the list of available block devices ------------- 1.23s
2026-01-17 00:43:49.256512 | orchestrator | Add known links to the list of available block devices ------------------ 1.18s
2026-01-17 00:43:49.256516 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.11s
2026-01-17 00:43:49.256524 | orchestrator | Add known partitions to the list of available block devices ------------- 0.97s
2026-01-17 00:43:49.256528 | orchestrator | Add known partitions to the list of available block devices ------------- 0.93s
2026-01-17 00:43:49.256532 | orchestrator | Add known links to the list of available block devices ------------------ 0.90s
2026-01-17 00:43:49.256535 | orchestrator | Add known links to the list of available block devices ------------------ 0.90s
2026-01-17 00:43:49.256539 | orchestrator | Add known links to the list of available block devices ------------------ 0.77s
2026-01-17 00:43:49.256545 | orchestrator | Print configuration data ------------------------------------------------ 0.74s
2026-01-17 00:43:49.256551 | orchestrator | Set DB devices config data ---------------------------------------------- 0.73s
2026-01-17 00:43:49.256558 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s
2026-01-17 00:43:49.256563 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s
2026-01-17 00:43:49.256571 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2026-01-17 00:43:49.600166 | orchestrator | Get initial list of available block devices ----------------------------- 0.68s
2026-01-17 00:43:49.600242 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2026-01-17 00:43:49.600250 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s
2026-01-17 00:43:49.600255 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.63s
2026-01-17 00:43:49.600260 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.62s
2026-01-17 00:43:49.600265 | orchestrator | Print WAL devices ------------------------------------------------------- 0.57s
2026-01-17 00:44:12.238639 | orchestrator | 2026-01-17 00:44:12 | INFO  | Task 88228dc3-fc5c-48bb-8181-ae6374b76a95 (sync inventory) is running in background. Output coming soon.
2026-01-17 00:44:39.402952 | orchestrator | 2026-01-17 00:44:13 | INFO  | Starting group_vars file reorganization
2026-01-17 00:44:39.403067 | orchestrator | 2026-01-17 00:44:13 | INFO  | Moved 0 file(s) to their respective directories
2026-01-17 00:44:39.403084 | orchestrator | 2026-01-17 00:44:13 | INFO  | Group_vars file reorganization completed
2026-01-17 00:44:39.403097 | orchestrator | 2026-01-17 00:44:16 | INFO  | Starting variable preparation from inventory
2026-01-17 00:44:39.403109 | orchestrator | 2026-01-17 00:44:19 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-01-17 00:44:39.403120 | orchestrator | 2026-01-17 00:44:19 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-01-17 00:44:39.403156 | orchestrator | 2026-01-17 00:44:19 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-01-17 00:44:39.403220 | orchestrator | 2026-01-17 00:44:19 | INFO  | 3 file(s) written, 6 host(s) processed
2026-01-17 00:44:39.403234 | orchestrator | 2026-01-17 00:44:19 | INFO  | Variable preparation completed
2026-01-17 00:44:39.403245 | orchestrator | 2026-01-17 00:44:21 | INFO  | Starting inventory overwrite handling
2026-01-17 00:44:39.403261 | orchestrator | 2026-01-17 00:44:21 | INFO  | Handling group overwrites in 99-overwrite
2026-01-17 00:44:39.403272 | orchestrator | 2026-01-17 00:44:21 | INFO  | Removing group frr:children from 60-generic
2026-01-17 00:44:39.403284 | orchestrator | 2026-01-17 00:44:21 | INFO  | Removing group netbird:children from 50-infrastructure
2026-01-17 00:44:39.403295 | orchestrator | 2026-01-17 00:44:21 | INFO  | Removing group ceph-rgw from 50-ceph
2026-01-17 00:44:39.403306 | orchestrator | 2026-01-17 00:44:21 | INFO  | Removing group ceph-mds from 50-ceph
2026-01-17 00:44:39.403317 | orchestrator | 2026-01-17 00:44:21 | INFO  | Handling group overwrites in 20-roles
2026-01-17 00:44:39.403406 | orchestrator | 2026-01-17 00:44:21 | INFO  | Removing group k3s_node from 50-infrastructure
2026-01-17 00:44:39.403430 | orchestrator | 2026-01-17 00:44:21 | INFO  | Removed 5 group(s) in total
2026-01-17 00:44:39.403450 | orchestrator | 2026-01-17 00:44:21 | INFO  | Inventory overwrite handling completed
2026-01-17 00:44:39.403470 | orchestrator | 2026-01-17 00:44:22 | INFO  | Starting merge of inventory files
2026-01-17 00:44:39.403490 | orchestrator | 2026-01-17 00:44:22 | INFO  | Inventory files merged successfully
2026-01-17 00:44:39.403505 | orchestrator | 2026-01-17 00:44:27 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-01-17 00:44:39.403517 | orchestrator | 2026-01-17 00:44:38 | INFO  | Successfully wrote ClusterShell configuration
2026-01-17 00:44:39.403537 | orchestrator | [master 3efb72f] 2026-01-17-00-44
2026-01-17 00:44:39.403557 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-01-17 00:44:41.822447 | orchestrator | 2026-01-17 00:44:41 | INFO  | Task 4e6f8988-67b1-4994-bd7e-425f25c03e7e (ceph-create-lvm-devices) was prepared for execution.
2026-01-17 00:44:41.822543 | orchestrator | 2026-01-17 00:44:41 | INFO  | It takes a moment until task 4e6f8988-67b1-4994-bd7e-425f25c03e7e (ceph-create-lvm-devices) has been started and output is visible here.
2026-01-17 00:44:55.380681 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-17 00:44:55.380775 | orchestrator | 2.16.14
2026-01-17 00:44:55.380784 | orchestrator |
2026-01-17 00:44:55.380789 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-17 00:44:55.380794 | orchestrator |
2026-01-17 00:44:55.380798 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-17 00:44:55.380802 | orchestrator | Saturday 17 January 2026 00:44:47 +0000 (0:00:00.411) 0:00:00.411 ******
2026-01-17 00:44:55.380807 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-17 00:44:55.380812 | orchestrator |
2026-01-17 00:44:55.380816 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-17 00:44:55.380820 | orchestrator | Saturday 17 January 2026 00:44:47 +0000 (0:00:00.281) 0:00:00.693 ******
2026-01-17 00:44:55.380824 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:44:55.380828 | orchestrator |
2026-01-17 00:44:55.380833 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:44:55.380837 | orchestrator | Saturday 17 January 2026 00:44:47 +0000 (0:00:00.247) 0:00:00.941 ******
2026-01-17 00:44:55.380841 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-17 00:44:55.380845 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-17 00:44:55.380849 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-17 00:44:55.380853 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-17 00:44:55.380857 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-17 00:44:55.380860 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-17 00:44:55.380864 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-17 00:44:55.380868 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-17 00:44:55.380872 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-17 00:44:55.380876 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-17 00:44:55.380888 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-17 00:44:55.380987 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-17 00:44:55.381009 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-17 00:44:55.381013 | orchestrator |
2026-01-17 00:44:55.381016 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:44:55.381020 | orchestrator | Saturday 17 January 2026 00:44:48 +0000 (0:00:00.644) 0:00:01.586 ******
2026-01-17 00:44:55.381024 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:44:55.381029 | orchestrator |
2026-01-17 00:44:55.381035 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:44:55.381042 | orchestrator | Saturday 17 January 2026 00:44:48 +0000 (0:00:00.188) 0:00:01.774 ******
2026-01-17 00:44:55.381047 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:44:55.381052 | orchestrator |
2026-01-17 00:44:55.381058 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:44:55.381064 | orchestrator | Saturday 17 January 2026 00:44:48 +0000 (0:00:00.234) 0:00:02.008 ******
2026-01-17 00:44:55.381073 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:44:55.381081 | orchestrator |
2026-01-17 00:44:55.381087 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:44:55.381093 | orchestrator | Saturday 17 January 2026 00:44:49 +0000 (0:00:00.204) 0:00:02.213 ******
2026-01-17 00:44:55.381099 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:44:55.381105 | orchestrator |
2026-01-17 00:44:55.381111 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:44:55.381117 | orchestrator | Saturday 17 January 2026 00:44:49 +0000 (0:00:00.199) 0:00:02.412 ******
2026-01-17 00:44:55.381123 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:44:55.381128 | orchestrator |
2026-01-17 00:44:55.381135 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:44:55.381141 | orchestrator | Saturday 17 January 2026 00:44:49 +0000 (0:00:00.223) 0:00:02.636 ******
2026-01-17 00:44:55.381147 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:44:55.381153 | orchestrator |
2026-01-17 00:44:55.381159 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:44:55.381165 | orchestrator | Saturday 17 January 2026 00:44:49 +0000 (0:00:00.203) 0:00:02.839 ******
2026-01-17 00:44:55.381171 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:44:55.381177 | orchestrator |
2026-01-17 00:44:55.381209 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:44:55.381215 | orchestrator | Saturday 17 January 2026 00:44:49 +0000 (0:00:00.193) 0:00:03.033 ******
2026-01-17 00:44:55.381221 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:44:55.381227 | orchestrator |
2026-01-17 00:44:55.381233 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:44:55.381240 | orchestrator | Saturday 17 January 2026 00:44:50 +0000 (0:00:00.208) 0:00:03.241 ******
2026-01-17 00:44:55.381247 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b)
2026-01-17 00:44:55.381255 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b)
2026-01-17 00:44:55.381261 | orchestrator |
2026-01-17 00:44:55.381268 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:44:55.381289 | orchestrator | Saturday 17 January 2026 00:44:50 +0000 (0:00:00.433) 0:00:03.675 ******
2026-01-17 00:44:55.381296 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_03c99a05-96d9-4471-aa9e-2837c3fbd541)
2026-01-17 00:44:55.381303 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_03c99a05-96d9-4471-aa9e-2837c3fbd541)
2026-01-17 00:44:55.381310 | orchestrator |
2026-01-17 00:44:55.381316 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:44:55.381323 | orchestrator | Saturday 17 January 2026 00:44:51 +0000 (0:00:00.647) 0:00:04.322 ******
2026-01-17 00:44:55.381329 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_386eb8af-61b6-405b-8873-9456a29b0ccf)
2026-01-17 00:44:55.381345 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_386eb8af-61b6-405b-8873-9456a29b0ccf)
2026-01-17 00:44:55.381352 | orchestrator |
2026-01-17 00:44:55.381358 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:44:55.381365 | orchestrator | Saturday 17 January 2026 00:44:51 +0000 (0:00:00.704) 0:00:05.026 ******
2026-01-17 00:44:55.381371 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_66cad329-aa8c-4366-8769-2bca3a7bcb41)
2026-01-17 00:44:55.381378 | orchestrator |
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_66cad329-aa8c-4366-8769-2bca3a7bcb41) 2026-01-17 00:44:55.381384 | orchestrator | 2026-01-17 00:44:55.381391 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:44:55.381397 | orchestrator | Saturday 17 January 2026 00:44:52 +0000 (0:00:00.930) 0:00:05.957 ****** 2026-01-17 00:44:55.381404 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-17 00:44:55.381410 | orchestrator | 2026-01-17 00:44:55.381417 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:44:55.381423 | orchestrator | Saturday 17 January 2026 00:44:53 +0000 (0:00:00.392) 0:00:06.349 ****** 2026-01-17 00:44:55.381430 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-01-17 00:44:55.381436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-01-17 00:44:55.381443 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-01-17 00:44:55.381464 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-01-17 00:44:55.381470 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-01-17 00:44:55.381477 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-01-17 00:44:55.381483 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-01-17 00:44:55.381489 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-01-17 00:44:55.381496 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-01-17 00:44:55.381502 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-01-17 00:44:55.381509 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-01-17 00:44:55.381518 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-01-17 00:44:55.381525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-01-17 00:44:55.381532 | orchestrator | 2026-01-17 00:44:55.381538 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:44:55.381544 | orchestrator | Saturday 17 January 2026 00:44:53 +0000 (0:00:00.544) 0:00:06.893 ****** 2026-01-17 00:44:55.381550 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:44:55.381555 | orchestrator | 2026-01-17 00:44:55.381559 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:44:55.381563 | orchestrator | Saturday 17 January 2026 00:44:54 +0000 (0:00:00.265) 0:00:07.159 ****** 2026-01-17 00:44:55.381567 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:44:55.381572 | orchestrator | 2026-01-17 00:44:55.381576 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:44:55.381580 | orchestrator | Saturday 17 January 2026 00:44:54 +0000 (0:00:00.199) 0:00:07.358 ****** 2026-01-17 00:44:55.381584 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:44:55.381589 | orchestrator | 2026-01-17 00:44:55.381593 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:44:55.381597 | orchestrator | Saturday 17 January 2026 00:44:54 +0000 (0:00:00.240) 0:00:07.599 ****** 2026-01-17 00:44:55.381602 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:44:55.381609 | orchestrator | 2026-01-17 00:44:55.381615 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-01-17 00:44:55.381621 | orchestrator | Saturday 17 January 2026 00:44:54 +0000 (0:00:00.219) 0:00:07.819 ****** 2026-01-17 00:44:55.381631 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:44:55.381637 | orchestrator | 2026-01-17 00:44:55.381643 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:44:55.381649 | orchestrator | Saturday 17 January 2026 00:44:54 +0000 (0:00:00.245) 0:00:08.064 ****** 2026-01-17 00:44:55.381655 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:44:55.381662 | orchestrator | 2026-01-17 00:44:55.381668 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:44:55.381673 | orchestrator | Saturday 17 January 2026 00:44:55 +0000 (0:00:00.211) 0:00:08.275 ****** 2026-01-17 00:44:55.381681 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:44:55.381685 | orchestrator | 2026-01-17 00:44:55.381693 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:45:03.808652 | orchestrator | Saturday 17 January 2026 00:44:55 +0000 (0:00:00.223) 0:00:08.499 ****** 2026-01-17 00:45:03.808734 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:03.808743 | orchestrator | 2026-01-17 00:45:03.808750 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:45:03.808757 | orchestrator | Saturday 17 January 2026 00:44:55 +0000 (0:00:00.241) 0:00:08.741 ****** 2026-01-17 00:45:03.808763 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-01-17 00:45:03.808770 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-01-17 00:45:03.808777 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-01-17 00:45:03.808782 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-01-17 00:45:03.808788 | orchestrator | 2026-01-17 
00:45:03.808794 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:45:03.808800 | orchestrator | Saturday 17 January 2026 00:44:56 +0000 (0:00:01.154) 0:00:09.895 ****** 2026-01-17 00:45:03.808806 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:03.808812 | orchestrator | 2026-01-17 00:45:03.808818 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:45:03.808824 | orchestrator | Saturday 17 January 2026 00:44:57 +0000 (0:00:00.261) 0:00:10.157 ****** 2026-01-17 00:45:03.808829 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:03.808835 | orchestrator | 2026-01-17 00:45:03.808841 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:45:03.808847 | orchestrator | Saturday 17 January 2026 00:44:57 +0000 (0:00:00.236) 0:00:10.394 ****** 2026-01-17 00:45:03.808853 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:03.808859 | orchestrator | 2026-01-17 00:45:03.808865 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:45:03.808871 | orchestrator | Saturday 17 January 2026 00:44:57 +0000 (0:00:00.266) 0:00:10.661 ****** 2026-01-17 00:45:03.808876 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:03.808882 | orchestrator | 2026-01-17 00:45:03.808888 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-17 00:45:03.808894 | orchestrator | Saturday 17 January 2026 00:44:57 +0000 (0:00:00.222) 0:00:10.883 ****** 2026-01-17 00:45:03.808900 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:03.808906 | orchestrator | 2026-01-17 00:45:03.808911 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-17 00:45:03.808917 | orchestrator | Saturday 17 January 2026 00:44:57 +0000 (0:00:00.135) 
0:00:11.019 ****** 2026-01-17 00:45:03.808924 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c5f49b22-d40f-5ab7-98f7-9762e23da2c0'}}) 2026-01-17 00:45:03.808930 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2051e43b-6678-567a-85ad-b7e1187d21ae'}}) 2026-01-17 00:45:03.808936 | orchestrator | 2026-01-17 00:45:03.808942 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-17 00:45:03.808970 | orchestrator | Saturday 17 January 2026 00:44:58 +0000 (0:00:00.195) 0:00:11.215 ****** 2026-01-17 00:45:03.808977 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0', 'data_vg': 'ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0'}) 2026-01-17 00:45:03.808984 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae', 'data_vg': 'ceph-2051e43b-6678-567a-85ad-b7e1187d21ae'}) 2026-01-17 00:45:03.808990 | orchestrator | 2026-01-17 00:45:03.808996 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-17 00:45:03.809002 | orchestrator | Saturday 17 January 2026 00:45:00 +0000 (0:00:02.013) 0:00:13.228 ****** 2026-01-17 00:45:03.809008 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0', 'data_vg': 'ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0'})  2026-01-17 00:45:03.809015 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae', 'data_vg': 'ceph-2051e43b-6678-567a-85ad-b7e1187d21ae'})  2026-01-17 00:45:03.809021 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:03.809027 | orchestrator | 2026-01-17 00:45:03.809033 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-17 00:45:03.809039 | orchestrator | Saturday 17 January 2026 
00:45:00 +0000 (0:00:00.159) 0:00:13.387 ****** 2026-01-17 00:45:03.809044 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0', 'data_vg': 'ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0'}) 2026-01-17 00:45:03.809050 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae', 'data_vg': 'ceph-2051e43b-6678-567a-85ad-b7e1187d21ae'}) 2026-01-17 00:45:03.809056 | orchestrator | 2026-01-17 00:45:03.809062 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-17 00:45:03.809068 | orchestrator | Saturday 17 January 2026 00:45:01 +0000 (0:00:01.411) 0:00:14.799 ****** 2026-01-17 00:45:03.809074 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0', 'data_vg': 'ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0'})  2026-01-17 00:45:03.809080 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae', 'data_vg': 'ceph-2051e43b-6678-567a-85ad-b7e1187d21ae'})  2026-01-17 00:45:03.809086 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:03.809092 | orchestrator | 2026-01-17 00:45:03.809098 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-17 00:45:03.809103 | orchestrator | Saturday 17 January 2026 00:45:01 +0000 (0:00:00.162) 0:00:14.962 ****** 2026-01-17 00:45:03.809122 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:03.809128 | orchestrator | 2026-01-17 00:45:03.809134 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-17 00:45:03.809140 | orchestrator | Saturday 17 January 2026 00:45:01 +0000 (0:00:00.165) 0:00:15.128 ****** 2026-01-17 00:45:03.809145 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0', 'data_vg': 
'ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0'})  2026-01-17 00:45:03.809151 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae', 'data_vg': 'ceph-2051e43b-6678-567a-85ad-b7e1187d21ae'})  2026-01-17 00:45:03.809157 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:03.809163 | orchestrator | 2026-01-17 00:45:03.809169 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-17 00:45:03.809174 | orchestrator | Saturday 17 January 2026 00:45:02 +0000 (0:00:00.451) 0:00:15.580 ****** 2026-01-17 00:45:03.809180 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:03.809186 | orchestrator | 2026-01-17 00:45:03.809245 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-17 00:45:03.809253 | orchestrator | Saturday 17 January 2026 00:45:02 +0000 (0:00:00.135) 0:00:15.716 ****** 2026-01-17 00:45:03.809264 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0', 'data_vg': 'ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0'})  2026-01-17 00:45:03.809271 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae', 'data_vg': 'ceph-2051e43b-6678-567a-85ad-b7e1187d21ae'})  2026-01-17 00:45:03.809278 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:03.809285 | orchestrator | 2026-01-17 00:45:03.809292 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-17 00:45:03.809299 | orchestrator | Saturday 17 January 2026 00:45:02 +0000 (0:00:00.156) 0:00:15.873 ****** 2026-01-17 00:45:03.809306 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:03.809313 | orchestrator | 2026-01-17 00:45:03.809320 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-17 00:45:03.809327 | orchestrator | 
Saturday 17 January 2026 00:45:02 +0000 (0:00:00.136) 0:00:16.009 ****** 2026-01-17 00:45:03.809333 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0', 'data_vg': 'ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0'})  2026-01-17 00:45:03.809340 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae', 'data_vg': 'ceph-2051e43b-6678-567a-85ad-b7e1187d21ae'})  2026-01-17 00:45:03.809347 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:03.809354 | orchestrator | 2026-01-17 00:45:03.809361 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-17 00:45:03.809367 | orchestrator | Saturday 17 January 2026 00:45:03 +0000 (0:00:00.177) 0:00:16.187 ****** 2026-01-17 00:45:03.809374 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:45:03.809381 | orchestrator | 2026-01-17 00:45:03.809388 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-17 00:45:03.809409 | orchestrator | Saturday 17 January 2026 00:45:03 +0000 (0:00:00.147) 0:00:16.334 ****** 2026-01-17 00:45:03.809420 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0', 'data_vg': 'ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0'})  2026-01-17 00:45:03.809427 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae', 'data_vg': 'ceph-2051e43b-6678-567a-85ad-b7e1187d21ae'})  2026-01-17 00:45:03.809434 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:03.809441 | orchestrator | 2026-01-17 00:45:03.809448 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-01-17 00:45:03.809455 | orchestrator | Saturday 17 January 2026 00:45:03 +0000 (0:00:00.147) 0:00:16.481 ****** 2026-01-17 00:45:03.809462 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0', 'data_vg': 'ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0'})  2026-01-17 00:45:03.809469 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae', 'data_vg': 'ceph-2051e43b-6678-567a-85ad-b7e1187d21ae'})  2026-01-17 00:45:03.809475 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:03.809482 | orchestrator | 2026-01-17 00:45:03.809489 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-17 00:45:03.809496 | orchestrator | Saturday 17 January 2026 00:45:03 +0000 (0:00:00.156) 0:00:16.637 ****** 2026-01-17 00:45:03.809502 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0', 'data_vg': 'ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0'})  2026-01-17 00:45:03.809509 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae', 'data_vg': 'ceph-2051e43b-6678-567a-85ad-b7e1187d21ae'})  2026-01-17 00:45:03.809516 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:03.809523 | orchestrator | 2026-01-17 00:45:03.809529 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-17 00:45:03.809541 | orchestrator | Saturday 17 January 2026 00:45:03 +0000 (0:00:00.154) 0:00:16.792 ****** 2026-01-17 00:45:03.809549 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:03.809555 | orchestrator | 2026-01-17 00:45:03.809562 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-17 00:45:03.809574 | orchestrator | Saturday 17 January 2026 00:45:03 +0000 (0:00:00.142) 0:00:16.934 ****** 2026-01-17 00:45:10.715421 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:10.715553 | orchestrator | 2026-01-17 00:45:10.715570 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-01-17 00:45:10.715583 | orchestrator | Saturday 17 January 2026 00:45:03 +0000 (0:00:00.130) 0:00:17.065 ****** 2026-01-17 00:45:10.715595 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:10.715606 | orchestrator | 2026-01-17 00:45:10.715617 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-17 00:45:10.715628 | orchestrator | Saturday 17 January 2026 00:45:04 +0000 (0:00:00.131) 0:00:17.196 ****** 2026-01-17 00:45:10.715639 | orchestrator | ok: [testbed-node-3] => { 2026-01-17 00:45:10.715650 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-17 00:45:10.715661 | orchestrator | } 2026-01-17 00:45:10.715672 | orchestrator | 2026-01-17 00:45:10.715683 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-17 00:45:10.715694 | orchestrator | Saturday 17 January 2026 00:45:04 +0000 (0:00:00.387) 0:00:17.584 ****** 2026-01-17 00:45:10.715705 | orchestrator | ok: [testbed-node-3] => { 2026-01-17 00:45:10.715716 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-17 00:45:10.715727 | orchestrator | } 2026-01-17 00:45:10.715737 | orchestrator | 2026-01-17 00:45:10.715748 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-17 00:45:10.715759 | orchestrator | Saturday 17 January 2026 00:45:04 +0000 (0:00:00.149) 0:00:17.733 ****** 2026-01-17 00:45:10.715770 | orchestrator | ok: [testbed-node-3] => { 2026-01-17 00:45:10.715781 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-17 00:45:10.715792 | orchestrator | } 2026-01-17 00:45:10.715803 | orchestrator | 2026-01-17 00:45:10.715814 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-17 00:45:10.715825 | orchestrator | Saturday 17 January 2026 00:45:04 +0000 (0:00:00.158) 0:00:17.892 ****** 2026-01-17 00:45:10.715835 | orchestrator | ok: 
[testbed-node-3] 2026-01-17 00:45:10.715846 | orchestrator | 2026-01-17 00:45:10.715857 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-17 00:45:10.715868 | orchestrator | Saturday 17 January 2026 00:45:05 +0000 (0:00:00.701) 0:00:18.593 ****** 2026-01-17 00:45:10.715879 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:45:10.715890 | orchestrator | 2026-01-17 00:45:10.715900 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-17 00:45:10.715911 | orchestrator | Saturday 17 January 2026 00:45:06 +0000 (0:00:00.576) 0:00:19.169 ****** 2026-01-17 00:45:10.715922 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:45:10.715933 | orchestrator | 2026-01-17 00:45:10.715945 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-17 00:45:10.715958 | orchestrator | Saturday 17 January 2026 00:45:06 +0000 (0:00:00.571) 0:00:19.740 ****** 2026-01-17 00:45:10.715970 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:45:10.715988 | orchestrator | 2026-01-17 00:45:10.716008 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-17 00:45:10.716028 | orchestrator | Saturday 17 January 2026 00:45:06 +0000 (0:00:00.154) 0:00:19.895 ****** 2026-01-17 00:45:10.716048 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:10.716062 | orchestrator | 2026-01-17 00:45:10.716076 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-17 00:45:10.716087 | orchestrator | Saturday 17 January 2026 00:45:06 +0000 (0:00:00.124) 0:00:20.019 ****** 2026-01-17 00:45:10.716098 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:10.716109 | orchestrator | 2026-01-17 00:45:10.716120 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-17 00:45:10.716175 | orchestrator | 
Saturday 17 January 2026 00:45:07 +0000 (0:00:00.124) 0:00:20.143 ****** 2026-01-17 00:45:10.716187 | orchestrator | ok: [testbed-node-3] => { 2026-01-17 00:45:10.716226 | orchestrator |  "vgs_report": { 2026-01-17 00:45:10.716240 | orchestrator |  "vg": [] 2026-01-17 00:45:10.716251 | orchestrator |  } 2026-01-17 00:45:10.716262 | orchestrator | } 2026-01-17 00:45:10.716273 | orchestrator | 2026-01-17 00:45:10.716284 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-17 00:45:10.716295 | orchestrator | Saturday 17 January 2026 00:45:07 +0000 (0:00:00.156) 0:00:20.299 ****** 2026-01-17 00:45:10.716306 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:10.716316 | orchestrator | 2026-01-17 00:45:10.716327 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-17 00:45:10.716338 | orchestrator | Saturday 17 January 2026 00:45:07 +0000 (0:00:00.146) 0:00:20.446 ****** 2026-01-17 00:45:10.716353 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:10.716372 | orchestrator | 2026-01-17 00:45:10.716389 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-17 00:45:10.716407 | orchestrator | Saturday 17 January 2026 00:45:07 +0000 (0:00:00.149) 0:00:20.596 ****** 2026-01-17 00:45:10.716424 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:10.716442 | orchestrator | 2026-01-17 00:45:10.716461 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-17 00:45:10.716479 | orchestrator | Saturday 17 January 2026 00:45:07 +0000 (0:00:00.378) 0:00:20.974 ****** 2026-01-17 00:45:10.716499 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:10.716518 | orchestrator | 2026-01-17 00:45:10.716538 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-17 00:45:10.716551 | orchestrator | 
Saturday 17 January 2026 00:45:08 +0000 (0:00:00.173) 0:00:21.148 ****** 2026-01-17 00:45:10.716562 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:10.716573 | orchestrator | 2026-01-17 00:45:10.716584 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-17 00:45:10.716595 | orchestrator | Saturday 17 January 2026 00:45:08 +0000 (0:00:00.183) 0:00:21.331 ****** 2026-01-17 00:45:10.716606 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:10.716617 | orchestrator | 2026-01-17 00:45:10.716627 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-17 00:45:10.716638 | orchestrator | Saturday 17 January 2026 00:45:08 +0000 (0:00:00.140) 0:00:21.472 ****** 2026-01-17 00:45:10.716649 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:10.716660 | orchestrator | 2026-01-17 00:45:10.716671 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-17 00:45:10.716682 | orchestrator | Saturday 17 January 2026 00:45:08 +0000 (0:00:00.147) 0:00:21.619 ****** 2026-01-17 00:45:10.716714 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:10.716726 | orchestrator | 2026-01-17 00:45:10.716737 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-17 00:45:10.716748 | orchestrator | Saturday 17 January 2026 00:45:08 +0000 (0:00:00.144) 0:00:21.764 ****** 2026-01-17 00:45:10.716758 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:10.716769 | orchestrator | 2026-01-17 00:45:10.716780 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-17 00:45:10.716791 | orchestrator | Saturday 17 January 2026 00:45:08 +0000 (0:00:00.131) 0:00:21.895 ****** 2026-01-17 00:45:10.716801 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:45:10.716812 | orchestrator | 2026-01-17 00:45:10.716823 
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-17 00:45:10.716833 | orchestrator | Saturday 17 January 2026 00:45:08 +0000 (0:00:00.132) 0:00:22.027 ******
2026-01-17 00:45:10.716844 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:45:10.716855 | orchestrator |
2026-01-17 00:45:10.716865 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-17 00:45:10.716876 | orchestrator | Saturday 17 January 2026 00:45:09 +0000 (0:00:00.162) 0:00:22.190 ******
2026-01-17 00:45:10.716899 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:45:10.716910 | orchestrator |
2026-01-17 00:45:10.716921 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-17 00:45:10.716932 | orchestrator | Saturday 17 January 2026 00:45:09 +0000 (0:00:00.126) 0:00:22.317 ******
2026-01-17 00:45:10.716943 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:45:10.716953 | orchestrator |
2026-01-17 00:45:10.716964 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-17 00:45:10.716975 | orchestrator | Saturday 17 January 2026 00:45:09 +0000 (0:00:00.140) 0:00:22.458 ******
2026-01-17 00:45:10.716986 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:45:10.716996 | orchestrator |
2026-01-17 00:45:10.717007 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-17 00:45:10.717018 | orchestrator | Saturday 17 January 2026 00:45:09 +0000 (0:00:00.139) 0:00:22.598 ******
2026-01-17 00:45:10.717030 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0', 'data_vg': 'ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0'})
2026-01-17 00:45:10.717043 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae', 'data_vg': 'ceph-2051e43b-6678-567a-85ad-b7e1187d21ae'})
2026-01-17 00:45:10.717062 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:45:10.717081 | orchestrator |
2026-01-17 00:45:10.717100 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-17 00:45:10.717118 | orchestrator | Saturday 17 January 2026 00:45:09 +0000 (0:00:00.170) 0:00:22.991 ******
2026-01-17 00:45:10.717134 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0', 'data_vg': 'ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0'})
2026-01-17 00:45:10.717153 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae', 'data_vg': 'ceph-2051e43b-6678-567a-85ad-b7e1187d21ae'})
2026-01-17 00:45:10.717172 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:45:10.717192 | orchestrator |
2026-01-17 00:45:10.717291 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-17 00:45:10.717304 | orchestrator | Saturday 17 January 2026 00:45:10 +0000 (0:00:00.170) 0:00:23.161 ******
2026-01-17 00:45:10.717315 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0', 'data_vg': 'ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0'})
2026-01-17 00:45:10.717326 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae', 'data_vg': 'ceph-2051e43b-6678-567a-85ad-b7e1187d21ae'})
2026-01-17 00:45:10.717337 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:45:10.717348 | orchestrator |
2026-01-17 00:45:10.717359 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-17 00:45:10.717370 | orchestrator | Saturday 17 January 2026 00:45:10 +0000 (0:00:00.152) 0:00:23.313 ******
2026-01-17 00:45:10.717381 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0', 'data_vg': 'ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0'})
2026-01-17 00:45:10.717392 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae', 'data_vg': 'ceph-2051e43b-6678-567a-85ad-b7e1187d21ae'})
2026-01-17 00:45:10.717403 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:45:10.717414 | orchestrator |
2026-01-17 00:45:10.717425 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-17 00:45:10.717436 | orchestrator | Saturday 17 January 2026 00:45:10 +0000 (0:00:00.174) 0:00:23.487 ******
2026-01-17 00:45:10.717447 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0', 'data_vg': 'ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0'})
2026-01-17 00:45:10.717458 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae', 'data_vg': 'ceph-2051e43b-6678-567a-85ad-b7e1187d21ae'})
2026-01-17 00:45:10.717479 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:45:10.717490 | orchestrator |
2026-01-17 00:45:10.717501 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-17 00:45:10.717522 | orchestrator | Saturday 17 January 2026 00:45:10 +0000 (0:00:00.193) 0:00:23.681 ******
2026-01-17 00:45:10.717544 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0', 'data_vg': 'ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0'})
2026-01-17 00:45:16.284463 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae', 'data_vg': 'ceph-2051e43b-6678-567a-85ad-b7e1187d21ae'})
2026-01-17 00:45:16.284545 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:45:16.284553 | orchestrator |
2026-01-17 00:45:16.284561 | orchestrator | TASK [Create DB LVs for
ceph_db_wal_devices] ***********************************
2026-01-17 00:45:16.284568 | orchestrator | Saturday 17 January 2026 00:45:10 +0000 (0:00:00.162) 0:00:23.843 ******
2026-01-17 00:45:16.284574 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0', 'data_vg': 'ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0'})
2026-01-17 00:45:16.284580 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae', 'data_vg': 'ceph-2051e43b-6678-567a-85ad-b7e1187d21ae'})
2026-01-17 00:45:16.284586 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:45:16.284591 | orchestrator |
2026-01-17 00:45:16.284597 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-17 00:45:16.284602 | orchestrator | Saturday 17 January 2026 00:45:10 +0000 (0:00:00.164) 0:00:24.008 ******
2026-01-17 00:45:16.284608 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0', 'data_vg': 'ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0'})
2026-01-17 00:45:16.284614 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae', 'data_vg': 'ceph-2051e43b-6678-567a-85ad-b7e1187d21ae'})
2026-01-17 00:45:16.284619 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:45:16.284625 | orchestrator |
2026-01-17 00:45:16.284630 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-17 00:45:16.284635 | orchestrator | Saturday 17 January 2026 00:45:11 +0000 (0:00:00.177) 0:00:24.185 ******
2026-01-17 00:45:16.284641 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:45:16.284647 | orchestrator |
2026-01-17 00:45:16.284653 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-17 00:45:16.284658 | orchestrator | Saturday 17 January 2026 00:45:11 +0000 (0:00:00.527) 0:00:24.713 ******
2026-01-17 00:45:16.284664 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:45:16.284669 | orchestrator |
2026-01-17 00:45:16.284674 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-01-17 00:45:16.284680 | orchestrator | Saturday 17 January 2026 00:45:12 +0000 (0:00:00.554) 0:00:25.268 ******
2026-01-17 00:45:16.284685 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:45:16.284690 | orchestrator |
2026-01-17 00:45:16.284696 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-01-17 00:45:16.284701 | orchestrator | Saturday 17 January 2026 00:45:12 +0000 (0:00:00.157) 0:00:25.426 ******
2026-01-17 00:45:16.284707 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae', 'vg_name': 'ceph-2051e43b-6678-567a-85ad-b7e1187d21ae'})
2026-01-17 00:45:16.284726 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0', 'vg_name': 'ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0'})
2026-01-17 00:45:16.284731 | orchestrator |
2026-01-17 00:45:16.284737 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-01-17 00:45:16.284742 | orchestrator | Saturday 17 January 2026 00:45:12 +0000 (0:00:00.229) 0:00:25.655 ******
2026-01-17 00:45:16.284776 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0', 'data_vg': 'ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0'})
2026-01-17 00:45:16.284786 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae', 'data_vg': 'ceph-2051e43b-6678-567a-85ad-b7e1187d21ae'})
2026-01-17 00:45:16.284795 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:45:16.284804 | orchestrator |
2026-01-17 00:45:16.284813 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-01-17 00:45:16.284823 | orchestrator | Saturday 17 January 2026 00:45:12 +0000 (0:00:00.380) 0:00:26.036 ******
2026-01-17 00:45:16.284832 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0', 'data_vg': 'ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0'})
2026-01-17 00:45:16.284842 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae', 'data_vg': 'ceph-2051e43b-6678-567a-85ad-b7e1187d21ae'})
2026-01-17 00:45:16.284847 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:45:16.284853 | orchestrator |
2026-01-17 00:45:16.284859 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-01-17 00:45:16.284864 | orchestrator | Saturday 17 January 2026 00:45:13 +0000 (0:00:00.180) 0:00:26.216 ******
2026-01-17 00:45:16.284869 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0', 'data_vg': 'ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0'})
2026-01-17 00:45:16.284875 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae', 'data_vg': 'ceph-2051e43b-6678-567a-85ad-b7e1187d21ae'})
2026-01-17 00:45:16.284880 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:45:16.284886 | orchestrator |
2026-01-17 00:45:16.284891 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-01-17 00:45:16.284897 | orchestrator | Saturday 17 January 2026 00:45:13 +0000 (0:00:00.167) 0:00:26.384 ******
2026-01-17 00:45:16.284914 | orchestrator | ok: [testbed-node-3] => {
2026-01-17 00:45:16.284920 | orchestrator |  "lvm_report": {
2026-01-17 00:45:16.284926 | orchestrator |  "lv": [
2026-01-17 00:45:16.284932 | orchestrator |  {
2026-01-17 00:45:16.284937 | orchestrator |  "lv_name":
"osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae",
2026-01-17 00:45:16.284944 | orchestrator |  "vg_name": "ceph-2051e43b-6678-567a-85ad-b7e1187d21ae"
2026-01-17 00:45:16.284949 | orchestrator |  },
2026-01-17 00:45:16.284954 | orchestrator |  {
2026-01-17 00:45:16.284960 | orchestrator |  "lv_name": "osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0",
2026-01-17 00:45:16.284965 | orchestrator |  "vg_name": "ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0"
2026-01-17 00:45:16.284970 | orchestrator |  }
2026-01-17 00:45:16.284976 | orchestrator |  ],
2026-01-17 00:45:16.284981 | orchestrator |  "pv": [
2026-01-17 00:45:16.284986 | orchestrator |  {
2026-01-17 00:45:16.284992 | orchestrator |  "pv_name": "/dev/sdb",
2026-01-17 00:45:16.284997 | orchestrator |  "vg_name": "ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0"
2026-01-17 00:45:16.285002 | orchestrator |  },
2026-01-17 00:45:16.285008 | orchestrator |  {
2026-01-17 00:45:16.285013 | orchestrator |  "pv_name": "/dev/sdc",
2026-01-17 00:45:16.285020 | orchestrator |  "vg_name": "ceph-2051e43b-6678-567a-85ad-b7e1187d21ae"
2026-01-17 00:45:16.285029 | orchestrator |  }
2026-01-17 00:45:16.285038 | orchestrator |  ]
2026-01-17 00:45:16.285049 | orchestrator |  }
2026-01-17 00:45:16.285058 | orchestrator | }
2026-01-17 00:45:16.285068 | orchestrator |
2026-01-17 00:45:16.285074 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-17 00:45:16.285080 | orchestrator |
2026-01-17 00:45:16.285087 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-17 00:45:16.285098 | orchestrator | Saturday 17 January 2026 00:45:13 +0000 (0:00:00.307) 0:00:26.691 ******
2026-01-17 00:45:16.285104 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-17 00:45:16.285110 | orchestrator |
2026-01-17 00:45:16.285116 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-17 00:45:16.285123 | orchestrator | Saturday 17 January 2026 00:45:13 +0000 (0:00:00.241) 0:00:26.932 ******
2026-01-17 00:45:16.285129 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:45:16.285135 | orchestrator |
2026-01-17 00:45:16.285141 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:45:16.285147 | orchestrator | Saturday 17 January 2026 00:45:14 +0000 (0:00:00.247) 0:00:27.180 ******
2026-01-17 00:45:16.285153 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-01-17 00:45:16.285159 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-01-17 00:45:16.285169 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-01-17 00:45:16.285178 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-01-17 00:45:16.285186 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-01-17 00:45:16.285195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-01-17 00:45:16.285310 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-01-17 00:45:16.285331 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-01-17 00:45:16.285337 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-01-17 00:45:16.285342 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-01-17 00:45:16.285347 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-01-17 00:45:16.285353 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-01-17 00:45:16.285358 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-01-17 00:45:16.285364 | orchestrator |
2026-01-17 00:45:16.285369 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:45:16.285375 | orchestrator | Saturday 17 January 2026 00:45:14 +0000 (0:00:00.457) 0:00:27.637 ******
2026-01-17 00:45:16.285380 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:16.285386 | orchestrator |
2026-01-17 00:45:16.285391 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:45:16.285397 | orchestrator | Saturday 17 January 2026 00:45:14 +0000 (0:00:00.214) 0:00:27.852 ******
2026-01-17 00:45:16.285402 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:16.285407 | orchestrator |
2026-01-17 00:45:16.285413 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:45:16.285418 | orchestrator | Saturday 17 January 2026 00:45:14 +0000 (0:00:00.204) 0:00:28.056 ******
2026-01-17 00:45:16.285424 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:16.285429 | orchestrator |
2026-01-17 00:45:16.285434 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:45:16.285440 | orchestrator | Saturday 17 January 2026 00:45:15 +0000 (0:00:00.691) 0:00:28.748 ******
2026-01-17 00:45:16.285445 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:16.285451 | orchestrator |
2026-01-17 00:45:16.285456 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:45:16.285461 | orchestrator | Saturday 17 January 2026 00:45:15 +0000 (0:00:00.228) 0:00:28.976 ******
2026-01-17 00:45:16.285467 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:16.285472 | orchestrator |
2026-01-17 00:45:16.285478 | orchestrator | TASK [Add known links to the
list of available block devices] ******************
2026-01-17 00:45:16.285491 | orchestrator | Saturday 17 January 2026 00:45:16 +0000 (0:00:00.226) 0:00:29.202 ******
2026-01-17 00:45:16.285496 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:16.285502 | orchestrator |
2026-01-17 00:45:16.285514 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:45:28.428106 | orchestrator | Saturday 17 January 2026 00:45:16 +0000 (0:00:00.207) 0:00:29.410 ******
2026-01-17 00:45:28.428216 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:28.428282 | orchestrator |
2026-01-17 00:45:28.428292 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:45:28.428300 | orchestrator | Saturday 17 January 2026 00:45:16 +0000 (0:00:00.229) 0:00:29.639 ******
2026-01-17 00:45:28.428308 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:28.428315 | orchestrator |
2026-01-17 00:45:28.428323 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:45:28.428330 | orchestrator | Saturday 17 January 2026 00:45:16 +0000 (0:00:00.219) 0:00:29.859 ******
2026-01-17 00:45:28.428338 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b)
2026-01-17 00:45:28.428346 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b)
2026-01-17 00:45:28.428354 | orchestrator |
2026-01-17 00:45:28.428361 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:45:28.428368 | orchestrator | Saturday 17 January 2026 00:45:17 +0000 (0:00:00.452) 0:00:30.312 ******
2026-01-17 00:45:28.428375 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_89953a4d-629d-4187-87cb-8eaa4172afa2)
2026-01-17 00:45:28.428382 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_89953a4d-629d-4187-87cb-8eaa4172afa2)
2026-01-17 00:45:28.428389 | orchestrator |
2026-01-17 00:45:28.428396 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:45:28.428404 | orchestrator | Saturday 17 January 2026 00:45:17 +0000 (0:00:00.433) 0:00:30.745 ******
2026-01-17 00:45:28.428411 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bd9e2794-f462-41d3-bb22-ac4c4b73281f)
2026-01-17 00:45:28.428418 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bd9e2794-f462-41d3-bb22-ac4c4b73281f)
2026-01-17 00:45:28.428425 | orchestrator |
2026-01-17 00:45:28.428432 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:45:28.428439 | orchestrator | Saturday 17 January 2026 00:45:18 +0000 (0:00:00.452) 0:00:31.198 ******
2026-01-17 00:45:28.428446 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1215eb05-d4be-4bfd-8c82-e464703dc233)
2026-01-17 00:45:28.428453 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1215eb05-d4be-4bfd-8c82-e464703dc233)
2026-01-17 00:45:28.428460 | orchestrator |
2026-01-17 00:45:28.428468 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-17 00:45:28.428475 | orchestrator | Saturday 17 January 2026 00:45:18 +0000 (0:00:00.663) 0:00:31.861 ******
2026-01-17 00:45:28.428482 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-17 00:45:28.428489 | orchestrator |
2026-01-17 00:45:28.428497 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:45:28.428509 | orchestrator | Saturday 17 January 2026 00:45:19 +0000 (0:00:00.607) 0:00:32.469 ******
2026-01-17 00:45:28.428522 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-01-17 00:45:28.428535 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-01-17 00:45:28.428547 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-01-17 00:45:28.428558 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-01-17 00:45:28.428570 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-01-17 00:45:28.428631 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-01-17 00:45:28.428648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-01-17 00:45:28.428660 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-01-17 00:45:28.428674 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-01-17 00:45:28.428688 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-01-17 00:45:28.428700 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-01-17 00:45:28.428714 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-01-17 00:45:28.428724 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-01-17 00:45:28.428737 | orchestrator |
2026-01-17 00:45:28.428748 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:45:28.428760 | orchestrator | Saturday 17 January 2026 00:45:20 +0000 (0:00:00.962) 0:00:33.432 ******
2026-01-17 00:45:28.428772 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:28.428785 | orchestrator |
2026-01-17
00:45:28.428797 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:45:28.428810 | orchestrator | Saturday 17 January 2026 00:45:20 +0000 (0:00:00.205) 0:00:33.638 ******
2026-01-17 00:45:28.428823 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:28.428835 | orchestrator |
2026-01-17 00:45:28.428844 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:45:28.428852 | orchestrator | Saturday 17 January 2026 00:45:20 +0000 (0:00:00.208) 0:00:33.846 ******
2026-01-17 00:45:28.428859 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:28.428866 | orchestrator |
2026-01-17 00:45:28.428890 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:45:28.428903 | orchestrator | Saturday 17 January 2026 00:45:20 +0000 (0:00:00.226) 0:00:34.073 ******
2026-01-17 00:45:28.428915 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:28.428927 | orchestrator |
2026-01-17 00:45:28.428938 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:45:28.428950 | orchestrator | Saturday 17 January 2026 00:45:21 +0000 (0:00:00.221) 0:00:34.294 ******
2026-01-17 00:45:28.428962 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:28.428973 | orchestrator |
2026-01-17 00:45:28.428985 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:45:28.428997 | orchestrator | Saturday 17 January 2026 00:45:21 +0000 (0:00:00.200) 0:00:34.495 ******
2026-01-17 00:45:28.429009 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:28.429022 | orchestrator |
2026-01-17 00:45:28.429035 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:45:28.429047 | orchestrator | Saturday 17 January 2026 00:45:21 +0000 (0:00:00.212) 0:00:34.708 ******
2026-01-17 00:45:28.429059 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:28.429072 | orchestrator |
2026-01-17 00:45:28.429084 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:45:28.429097 | orchestrator | Saturday 17 January 2026 00:45:21 +0000 (0:00:00.211) 0:00:34.920 ******
2026-01-17 00:45:28.429110 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:28.429121 | orchestrator |
2026-01-17 00:45:28.429135 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:45:28.429146 | orchestrator | Saturday 17 January 2026 00:45:22 +0000 (0:00:00.212) 0:00:35.132 ******
2026-01-17 00:45:28.429159 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-01-17 00:45:28.429171 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-01-17 00:45:28.429184 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-01-17 00:45:28.429197 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-01-17 00:45:28.429239 | orchestrator |
2026-01-17 00:45:28.429252 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:45:28.429259 | orchestrator | Saturday 17 January 2026 00:45:22 +0000 (0:00:00.965) 0:00:36.098 ******
2026-01-17 00:45:28.429267 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:28.429274 | orchestrator |
2026-01-17 00:45:28.429281 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:45:28.429288 | orchestrator | Saturday 17 January 2026 00:45:23 +0000 (0:00:00.240) 0:00:36.338 ******
2026-01-17 00:45:28.429295 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:28.429302 | orchestrator |
2026-01-17 00:45:28.429310 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:45:28.429317 | orchestrator | Saturday 17 January 2026 00:45:24 +0000 (0:00:00.883) 0:00:37.222 ******
2026-01-17 00:45:28.429324 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:28.429331 | orchestrator |
2026-01-17 00:45:28.429342 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-17 00:45:28.429354 | orchestrator | Saturday 17 January 2026 00:45:24 +0000 (0:00:00.223) 0:00:37.447 ******
2026-01-17 00:45:28.429364 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:28.429376 | orchestrator |
2026-01-17 00:45:28.429389 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-17 00:45:28.429408 | orchestrator | Saturday 17 January 2026 00:45:24 +0000 (0:00:00.248) 0:00:37.695 ******
2026-01-17 00:45:28.429418 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:28.429425 | orchestrator |
2026-01-17 00:45:28.429432 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-17 00:45:28.429440 | orchestrator | Saturday 17 January 2026 00:45:24 +0000 (0:00:00.187) 0:00:37.883 ******
2026-01-17 00:45:28.429447 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'}})
2026-01-17 00:45:28.429455 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fbc9b557-fafa-5136-b4c6-7d286dd557bb'}})
2026-01-17 00:45:28.429462 | orchestrator |
2026-01-17 00:45:28.429469 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-17 00:45:28.429476 | orchestrator | Saturday 17 January 2026 00:45:24 +0000 (0:00:00.214) 0:00:38.098 ******
2026-01-17 00:45:28.429485 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165', 'data_vg': 'ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'})
2026-01-17 00:45:28.429494 | orchestrator | changed: [testbed-node-4] =>
(item={'data': 'osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb', 'data_vg': 'ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb'})
2026-01-17 00:45:28.429501 | orchestrator |
2026-01-17 00:45:28.429509 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-17 00:45:28.429516 | orchestrator | Saturday 17 January 2026 00:45:26 +0000 (0:00:01.960) 0:00:40.058 ******
2026-01-17 00:45:28.429523 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165', 'data_vg': 'ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'})
2026-01-17 00:45:28.429532 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb', 'data_vg': 'ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb'})
2026-01-17 00:45:28.429539 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:28.429546 | orchestrator |
2026-01-17 00:45:28.429553 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-17 00:45:28.429561 | orchestrator | Saturday 17 January 2026 00:45:27 +0000 (0:00:00.172) 0:00:40.231 ******
2026-01-17 00:45:28.429568 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165', 'data_vg': 'ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'})
2026-01-17 00:45:28.429584 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb', 'data_vg': 'ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb'})
2026-01-17 00:45:34.189689 | orchestrator |
2026-01-17 00:45:34.189822 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-17 00:45:34.189841 | orchestrator | Saturday 17 January 2026 00:45:28 +0000 (0:00:01.320) 0:00:41.552 ******
2026-01-17 00:45:34.189854 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165', 'data_vg': 'ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'})
2026-01-17 00:45:34.189867 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb', 'data_vg': 'ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb'})
2026-01-17 00:45:34.189878 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:34.189889 | orchestrator |
2026-01-17 00:45:34.189901 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-17 00:45:34.189912 | orchestrator | Saturday 17 January 2026 00:45:28 +0000 (0:00:00.159) 0:00:41.712 ******
2026-01-17 00:45:34.189923 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:34.189933 | orchestrator |
2026-01-17 00:45:34.189944 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-17 00:45:34.189955 | orchestrator | Saturday 17 January 2026 00:45:28 +0000 (0:00:00.144) 0:00:41.857 ******
2026-01-17 00:45:34.189966 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165', 'data_vg': 'ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'})
2026-01-17 00:45:34.189977 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb', 'data_vg': 'ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb'})
2026-01-17 00:45:34.189988 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:34.189998 | orchestrator |
2026-01-17 00:45:34.190009 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-17 00:45:34.190077 | orchestrator | Saturday 17 January 2026 00:45:28 +0000 (0:00:00.147) 0:00:42.005 ******
2026-01-17 00:45:34.190089 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:34.190100 | orchestrator |
2026-01-17 00:45:34.190111 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-17 00:45:34.190121 | orchestrator | Saturday 17 January 2026 00:45:29 +0000 (0:00:00.136) 0:00:42.142 ******
2026-01-17 00:45:34.190132 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165', 'data_vg': 'ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'})
2026-01-17 00:45:34.190143 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb', 'data_vg': 'ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb'})
2026-01-17 00:45:34.190154 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:34.190165 | orchestrator |
2026-01-17 00:45:34.190175 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-17 00:45:34.190204 | orchestrator | Saturday 17 January 2026 00:45:29 +0000 (0:00:00.378) 0:00:42.520 ******
2026-01-17 00:45:34.190216 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:34.190254 | orchestrator |
2026-01-17 00:45:34.190266 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-17 00:45:34.190277 | orchestrator | Saturday 17 January 2026 00:45:29 +0000 (0:00:00.164) 0:00:42.685 ******
2026-01-17 00:45:34.190288 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165', 'data_vg': 'ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'})
2026-01-17 00:45:34.190299 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb', 'data_vg': 'ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb'})
2026-01-17 00:45:34.190310 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:34.190320 | orchestrator |
2026-01-17 00:45:34.190331 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-17 00:45:34.190342 | orchestrator | Saturday 17 January 2026 00:45:29 +0000 (0:00:00.149) 0:00:42.834 ******
2026-01-17 00:45:34.190366 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:45:34.190414 | orchestrator |
2026-01-17 00:45:34.190427 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-17 00:45:34.190438 | orchestrator | Saturday 17 January 2026 00:45:29 +0000 (0:00:00.146) 0:00:42.981 ******
2026-01-17 00:45:34.190449 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165', 'data_vg': 'ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'})
2026-01-17 00:45:34.190460 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb', 'data_vg': 'ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb'})
2026-01-17 00:45:34.190470 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:34.190481 | orchestrator |
2026-01-17 00:45:34.190492 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-17 00:45:34.190503 | orchestrator | Saturday 17 January 2026 00:45:30 +0000 (0:00:00.161) 0:00:43.142 ******
2026-01-17 00:45:34.190513 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165', 'data_vg': 'ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'})
2026-01-17 00:45:34.190524 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb', 'data_vg': 'ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb'})
2026-01-17 00:45:34.190535 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:34.190546 | orchestrator |
2026-01-17 00:45:34.190556 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-17 00:45:34.190585 | orchestrator | Saturday 17 January 2026 00:45:30 +0000 (0:00:00.165) 0:00:43.308 ******
2026-01-17 00:45:34.190596 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165', 'data_vg': 'ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'})
2026-01-17 00:45:34.190608 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb', 'data_vg': 'ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb'})
2026-01-17 00:45:34.190619 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:34.190629 | orchestrator |
2026-01-17 00:45:34.190640 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-17 00:45:34.190651 | orchestrator | Saturday 17 January 2026 00:45:30 +0000 (0:00:00.153) 0:00:43.461 ******
2026-01-17 00:45:34.190662 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:34.190673 | orchestrator |
2026-01-17 00:45:34.190684 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-17 00:45:34.190694 | orchestrator | Saturday 17 January 2026 00:45:30 +0000 (0:00:00.134) 0:00:43.596 ******
2026-01-17 00:45:34.190705 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:34.190715 | orchestrator |
2026-01-17 00:45:34.190726 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-17 00:45:34.190737 | orchestrator | Saturday 17 January 2026 00:45:30 +0000 (0:00:00.151) 0:00:43.748 ******
2026-01-17 00:45:34.190747 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:45:34.190758 | orchestrator |
2026-01-17 00:45:34.190768 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-17 00:45:34.190779 | orchestrator | Saturday 17 January 2026 00:45:30 +0000 (0:00:00.143) 0:00:43.891 ******
2026-01-17 00:45:34.190790 | orchestrator | ok: [testbed-node-4] => {
2026-01-17 00:45:34.190800 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-01-17 00:45:34.190812 | orchestrator | }
2026-01-17 00:45:34.190822 | orchestrator |
2026-01-17 00:45:34.190833 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-17 00:45:34.190844 | orchestrator | Saturday 17 January 2026 00:45:30 +0000 (0:00:00.144) 0:00:44.035 ******
2026-01-17 00:45:34.190854 | orchestrator | ok: [testbed-node-4] => {
2026-01-17 00:45:34.190865 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-01-17 00:45:34.190876 | orchestrator | }
2026-01-17 00:45:34.190886 | orchestrator |
2026-01-17 00:45:34.190897 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-17 00:45:34.190907 | orchestrator | Saturday 17 January 2026 00:45:31 +0000 (0:00:00.149) 0:00:44.184 ******
2026-01-17 00:45:34.190926 | orchestrator | ok: [testbed-node-4] => {
2026-01-17 00:45:34.190937 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-01-17 00:45:34.190947 | orchestrator | }
2026-01-17 00:45:34.190958 | orchestrator |
2026-01-17 00:45:34.190968 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-17 00:45:34.190979 | orchestrator | Saturday 17 January 2026 00:45:31 +0000 (0:00:00.372) 0:00:44.557 ******
2026-01-17 00:45:34.190989 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:45:34.191000 | orchestrator |
2026-01-17 00:45:34.191011 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-17 00:45:34.191022 | orchestrator | Saturday 17 January 2026 00:45:31 +0000 (0:00:00.534) 0:00:45.091 ******
2026-01-17 00:45:34.191032 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:45:34.191043 | orchestrator |
2026-01-17 00:45:34.191054 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-17 00:45:34.191064 | orchestrator | Saturday 17 January 2026 00:45:32 +0000 (0:00:00.545) 0:00:45.637 ******
2026-01-17 00:45:34.191075 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:45:34.191085 | orchestrator |
2026-01-17 00:45:34.191096 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output]
************************* 2026-01-17 00:45:34.191106 | orchestrator | Saturday 17 January 2026 00:45:33 +0000 (0:00:00.529) 0:00:46.167 ****** 2026-01-17 00:45:34.191117 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:45:34.191128 | orchestrator | 2026-01-17 00:45:34.191138 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-17 00:45:34.191149 | orchestrator | Saturday 17 January 2026 00:45:33 +0000 (0:00:00.154) 0:00:46.322 ****** 2026-01-17 00:45:34.191159 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:34.191170 | orchestrator | 2026-01-17 00:45:34.191188 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-17 00:45:34.191200 | orchestrator | Saturday 17 January 2026 00:45:33 +0000 (0:00:00.108) 0:00:46.431 ****** 2026-01-17 00:45:34.191210 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:34.191221 | orchestrator | 2026-01-17 00:45:34.191256 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-17 00:45:34.191267 | orchestrator | Saturday 17 January 2026 00:45:33 +0000 (0:00:00.118) 0:00:46.549 ****** 2026-01-17 00:45:34.191278 | orchestrator | ok: [testbed-node-4] => { 2026-01-17 00:45:34.191289 | orchestrator |  "vgs_report": { 2026-01-17 00:45:34.191301 | orchestrator |  "vg": [] 2026-01-17 00:45:34.191312 | orchestrator |  } 2026-01-17 00:45:34.191323 | orchestrator | } 2026-01-17 00:45:34.191334 | orchestrator | 2026-01-17 00:45:34.191344 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-17 00:45:34.191355 | orchestrator | Saturday 17 January 2026 00:45:33 +0000 (0:00:00.172) 0:00:46.722 ****** 2026-01-17 00:45:34.191366 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:34.191377 | orchestrator | 2026-01-17 00:45:34.191387 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-01-17 00:45:34.191399 | orchestrator | Saturday 17 January 2026 00:45:33 +0000 (0:00:00.125) 0:00:46.847 ****** 2026-01-17 00:45:34.191409 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:34.191420 | orchestrator | 2026-01-17 00:45:34.191431 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-17 00:45:34.191442 | orchestrator | Saturday 17 January 2026 00:45:33 +0000 (0:00:00.165) 0:00:47.012 ****** 2026-01-17 00:45:34.191453 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:34.191463 | orchestrator | 2026-01-17 00:45:34.191474 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-17 00:45:34.191485 | orchestrator | Saturday 17 January 2026 00:45:34 +0000 (0:00:00.153) 0:00:47.165 ****** 2026-01-17 00:45:34.191496 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:34.191507 | orchestrator | 2026-01-17 00:45:34.191525 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-17 00:45:39.134607 | orchestrator | Saturday 17 January 2026 00:45:34 +0000 (0:00:00.148) 0:00:47.314 ****** 2026-01-17 00:45:39.134758 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:39.134791 | orchestrator | 2026-01-17 00:45:39.134805 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-17 00:45:39.134817 | orchestrator | Saturday 17 January 2026 00:45:34 +0000 (0:00:00.359) 0:00:47.673 ****** 2026-01-17 00:45:39.134828 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:39.134839 | orchestrator | 2026-01-17 00:45:39.134850 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-17 00:45:39.134861 | orchestrator | Saturday 17 January 2026 00:45:34 +0000 (0:00:00.139) 0:00:47.812 ****** 2026-01-17 00:45:39.134871 | orchestrator | skipping: [testbed-node-4] 
2026-01-17 00:45:39.134882 | orchestrator | 2026-01-17 00:45:39.134893 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-17 00:45:39.134904 | orchestrator | Saturday 17 January 2026 00:45:34 +0000 (0:00:00.137) 0:00:47.949 ****** 2026-01-17 00:45:39.134914 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:39.134925 | orchestrator | 2026-01-17 00:45:39.134935 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-17 00:45:39.134946 | orchestrator | Saturday 17 January 2026 00:45:34 +0000 (0:00:00.146) 0:00:48.096 ****** 2026-01-17 00:45:39.134957 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:39.134967 | orchestrator | 2026-01-17 00:45:39.134978 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-17 00:45:39.134988 | orchestrator | Saturday 17 January 2026 00:45:35 +0000 (0:00:00.141) 0:00:48.237 ****** 2026-01-17 00:45:39.134999 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:39.135010 | orchestrator | 2026-01-17 00:45:39.135021 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-17 00:45:39.135032 | orchestrator | Saturday 17 January 2026 00:45:35 +0000 (0:00:00.148) 0:00:48.386 ****** 2026-01-17 00:45:39.135042 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:39.135053 | orchestrator | 2026-01-17 00:45:39.135064 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-17 00:45:39.135074 | orchestrator | Saturday 17 January 2026 00:45:35 +0000 (0:00:00.138) 0:00:48.525 ****** 2026-01-17 00:45:39.135085 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:39.135096 | orchestrator | 2026-01-17 00:45:39.135106 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-17 00:45:39.135117 | orchestrator | 
Saturday 17 January 2026 00:45:35 +0000 (0:00:00.134) 0:00:48.659 ****** 2026-01-17 00:45:39.135128 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:39.135141 | orchestrator | 2026-01-17 00:45:39.135153 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-17 00:45:39.135165 | orchestrator | Saturday 17 January 2026 00:45:35 +0000 (0:00:00.153) 0:00:48.813 ****** 2026-01-17 00:45:39.135177 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:39.135190 | orchestrator | 2026-01-17 00:45:39.135203 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-17 00:45:39.135257 | orchestrator | Saturday 17 January 2026 00:45:35 +0000 (0:00:00.127) 0:00:48.940 ****** 2026-01-17 00:45:39.135273 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165', 'data_vg': 'ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'})  2026-01-17 00:45:39.135288 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb', 'data_vg': 'ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb'})  2026-01-17 00:45:39.135300 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:39.135312 | orchestrator | 2026-01-17 00:45:39.135324 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-17 00:45:39.135335 | orchestrator | Saturday 17 January 2026 00:45:35 +0000 (0:00:00.156) 0:00:49.097 ****** 2026-01-17 00:45:39.135345 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165', 'data_vg': 'ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'})  2026-01-17 00:45:39.135367 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb', 'data_vg': 'ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb'})  2026-01-17 00:45:39.135377 | orchestrator | skipping: 
[testbed-node-4] 2026-01-17 00:45:39.135388 | orchestrator | 2026-01-17 00:45:39.135398 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-17 00:45:39.135409 | orchestrator | Saturday 17 January 2026 00:45:36 +0000 (0:00:00.155) 0:00:49.253 ****** 2026-01-17 00:45:39.135419 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165', 'data_vg': 'ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'})  2026-01-17 00:45:39.135431 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb', 'data_vg': 'ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb'})  2026-01-17 00:45:39.135441 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:39.135452 | orchestrator | 2026-01-17 00:45:39.135462 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-01-17 00:45:39.135473 | orchestrator | Saturday 17 January 2026 00:45:36 +0000 (0:00:00.379) 0:00:49.633 ****** 2026-01-17 00:45:39.135484 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165', 'data_vg': 'ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'})  2026-01-17 00:45:39.135495 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb', 'data_vg': 'ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb'})  2026-01-17 00:45:39.135505 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:39.135517 | orchestrator | 2026-01-17 00:45:39.135545 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-17 00:45:39.135556 | orchestrator | Saturday 17 January 2026 00:45:36 +0000 (0:00:00.159) 0:00:49.792 ****** 2026-01-17 00:45:39.135567 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165', 'data_vg': 
'ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'})  2026-01-17 00:45:39.135578 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb', 'data_vg': 'ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb'})  2026-01-17 00:45:39.135589 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:39.135599 | orchestrator | 2026-01-17 00:45:39.135610 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-17 00:45:39.135620 | orchestrator | Saturday 17 January 2026 00:45:36 +0000 (0:00:00.183) 0:00:49.976 ****** 2026-01-17 00:45:39.135632 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165', 'data_vg': 'ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'})  2026-01-17 00:45:39.135643 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb', 'data_vg': 'ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb'})  2026-01-17 00:45:39.135653 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:39.135664 | orchestrator | 2026-01-17 00:45:39.135675 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-17 00:45:39.135685 | orchestrator | Saturday 17 January 2026 00:45:37 +0000 (0:00:00.186) 0:00:50.163 ****** 2026-01-17 00:45:39.135696 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165', 'data_vg': 'ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'})  2026-01-17 00:45:39.135707 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb', 'data_vg': 'ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb'})  2026-01-17 00:45:39.135717 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:39.135728 | orchestrator | 2026-01-17 00:45:39.135739 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-17 
00:45:39.135749 | orchestrator | Saturday 17 January 2026 00:45:37 +0000 (0:00:00.169) 0:00:50.333 ****** 2026-01-17 00:45:39.135767 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165', 'data_vg': 'ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'})  2026-01-17 00:45:39.135783 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb', 'data_vg': 'ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb'})  2026-01-17 00:45:39.135795 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:39.135805 | orchestrator | 2026-01-17 00:45:39.135816 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-17 00:45:39.135827 | orchestrator | Saturday 17 January 2026 00:45:37 +0000 (0:00:00.162) 0:00:50.495 ****** 2026-01-17 00:45:39.135837 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:45:39.135848 | orchestrator | 2026-01-17 00:45:39.135859 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-17 00:45:39.135869 | orchestrator | Saturday 17 January 2026 00:45:37 +0000 (0:00:00.535) 0:00:51.030 ****** 2026-01-17 00:45:39.135880 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:45:39.135890 | orchestrator | 2026-01-17 00:45:39.135901 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-17 00:45:39.135911 | orchestrator | Saturday 17 January 2026 00:45:38 +0000 (0:00:00.517) 0:00:51.548 ****** 2026-01-17 00:45:39.135922 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:45:39.135932 | orchestrator | 2026-01-17 00:45:39.135943 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-17 00:45:39.135953 | orchestrator | Saturday 17 January 2026 00:45:38 +0000 (0:00:00.140) 0:00:51.688 ****** 2026-01-17 00:45:39.135964 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165', 'vg_name': 'ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'}) 2026-01-17 00:45:39.135975 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb', 'vg_name': 'ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb'}) 2026-01-17 00:45:39.135986 | orchestrator | 2026-01-17 00:45:39.135997 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-17 00:45:39.136007 | orchestrator | Saturday 17 January 2026 00:45:38 +0000 (0:00:00.195) 0:00:51.883 ****** 2026-01-17 00:45:39.136018 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165', 'data_vg': 'ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'})  2026-01-17 00:45:39.136029 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb', 'data_vg': 'ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb'})  2026-01-17 00:45:39.136039 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:39.136050 | orchestrator | 2026-01-17 00:45:39.136061 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-17 00:45:39.136072 | orchestrator | Saturday 17 January 2026 00:45:38 +0000 (0:00:00.164) 0:00:52.047 ****** 2026-01-17 00:45:39.136082 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165', 'data_vg': 'ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'})  2026-01-17 00:45:39.136100 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb', 'data_vg': 'ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb'})  2026-01-17 00:45:45.450798 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:45.450895 | orchestrator | 2026-01-17 00:45:45.450908 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-17 00:45:45.450918 | 
orchestrator | Saturday 17 January 2026 00:45:39 +0000 (0:00:00.215) 0:00:52.262 ****** 2026-01-17 00:45:45.450926 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165', 'data_vg': 'ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'})  2026-01-17 00:45:45.450936 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb', 'data_vg': 'ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb'})  2026-01-17 00:45:45.450944 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:45:45.450975 | orchestrator | 2026-01-17 00:45:45.450984 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-17 00:45:45.450993 | orchestrator | Saturday 17 January 2026 00:45:39 +0000 (0:00:00.158) 0:00:52.420 ****** 2026-01-17 00:45:45.451001 | orchestrator | ok: [testbed-node-4] => { 2026-01-17 00:45:45.451009 | orchestrator |  "lvm_report": { 2026-01-17 00:45:45.451019 | orchestrator |  "lv": [ 2026-01-17 00:45:45.451027 | orchestrator |  { 2026-01-17 00:45:45.451035 | orchestrator |  "lv_name": "osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165", 2026-01-17 00:45:45.451045 | orchestrator |  "vg_name": "ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165" 2026-01-17 00:45:45.451052 | orchestrator |  }, 2026-01-17 00:45:45.451060 | orchestrator |  { 2026-01-17 00:45:45.451068 | orchestrator |  "lv_name": "osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb", 2026-01-17 00:45:45.451076 | orchestrator |  "vg_name": "ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb" 2026-01-17 00:45:45.451084 | orchestrator |  } 2026-01-17 00:45:45.451092 | orchestrator |  ], 2026-01-17 00:45:45.451100 | orchestrator |  "pv": [ 2026-01-17 00:45:45.451108 | orchestrator |  { 2026-01-17 00:45:45.451116 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-17 00:45:45.451123 | orchestrator |  "vg_name": "ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165" 2026-01-17 00:45:45.451131 | orchestrator |  }, 2026-01-17 
00:45:45.451139 | orchestrator |  { 2026-01-17 00:45:45.451147 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-17 00:45:45.451155 | orchestrator |  "vg_name": "ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb" 2026-01-17 00:45:45.451163 | orchestrator |  } 2026-01-17 00:45:45.451170 | orchestrator |  ] 2026-01-17 00:45:45.451178 | orchestrator |  } 2026-01-17 00:45:45.451187 | orchestrator | } 2026-01-17 00:45:45.451195 | orchestrator | 2026-01-17 00:45:45.451203 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-17 00:45:45.451211 | orchestrator | 2026-01-17 00:45:45.451218 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-17 00:45:45.451226 | orchestrator | Saturday 17 January 2026 00:45:39 +0000 (0:00:00.521) 0:00:52.942 ****** 2026-01-17 00:45:45.451234 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-17 00:45:45.451300 | orchestrator | 2026-01-17 00:45:45.451309 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-17 00:45:45.451317 | orchestrator | Saturday 17 January 2026 00:45:40 +0000 (0:00:00.257) 0:00:53.199 ****** 2026-01-17 00:45:45.451325 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:45:45.451333 | orchestrator | 2026-01-17 00:45:45.451343 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:45:45.451352 | orchestrator | Saturday 17 January 2026 00:45:40 +0000 (0:00:00.251) 0:00:53.451 ****** 2026-01-17 00:45:45.451361 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-01-17 00:45:45.451371 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-01-17 00:45:45.451380 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-01-17 00:45:45.451388 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-01-17 00:45:45.451397 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-01-17 00:45:45.451406 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-01-17 00:45:45.451416 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-01-17 00:45:45.451426 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-01-17 00:45:45.451436 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-01-17 00:45:45.451458 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-01-17 00:45:45.451473 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-01-17 00:45:45.451487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-01-17 00:45:45.451500 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-01-17 00:45:45.451514 | orchestrator | 2026-01-17 00:45:45.451532 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:45:45.451545 | orchestrator | Saturday 17 January 2026 00:45:40 +0000 (0:00:00.420) 0:00:53.872 ****** 2026-01-17 00:45:45.451558 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:45.451572 | orchestrator | 2026-01-17 00:45:45.451585 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:45:45.451597 | orchestrator | Saturday 17 January 2026 00:45:40 +0000 (0:00:00.219) 0:00:54.091 ****** 2026-01-17 00:45:45.451609 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:45.451622 | orchestrator | 2026-01-17 
00:45:45.451634 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:45:45.451670 | orchestrator | Saturday 17 January 2026 00:45:41 +0000 (0:00:00.209) 0:00:54.301 ****** 2026-01-17 00:45:45.451684 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:45.451698 | orchestrator | 2026-01-17 00:45:45.451711 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:45:45.451724 | orchestrator | Saturday 17 January 2026 00:45:41 +0000 (0:00:00.202) 0:00:54.503 ****** 2026-01-17 00:45:45.451737 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:45.451751 | orchestrator | 2026-01-17 00:45:45.451764 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:45:45.451836 | orchestrator | Saturday 17 January 2026 00:45:41 +0000 (0:00:00.225) 0:00:54.729 ****** 2026-01-17 00:45:45.451855 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:45.451871 | orchestrator | 2026-01-17 00:45:45.451887 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:45:45.451902 | orchestrator | Saturday 17 January 2026 00:45:42 +0000 (0:00:00.653) 0:00:55.382 ****** 2026-01-17 00:45:45.451917 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:45.451931 | orchestrator | 2026-01-17 00:45:45.451948 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:45:45.451963 | orchestrator | Saturday 17 January 2026 00:45:42 +0000 (0:00:00.192) 0:00:55.575 ****** 2026-01-17 00:45:45.451977 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:45.451991 | orchestrator | 2026-01-17 00:45:45.452006 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:45:45.452021 | orchestrator | Saturday 17 January 2026 00:45:42 +0000 (0:00:00.210) 
0:00:55.785 ****** 2026-01-17 00:45:45.452036 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:45.452050 | orchestrator | 2026-01-17 00:45:45.452065 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:45:45.452080 | orchestrator | Saturday 17 January 2026 00:45:42 +0000 (0:00:00.204) 0:00:55.989 ****** 2026-01-17 00:45:45.452095 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82) 2026-01-17 00:45:45.452110 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82) 2026-01-17 00:45:45.452126 | orchestrator | 2026-01-17 00:45:45.452141 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:45:45.452156 | orchestrator | Saturday 17 January 2026 00:45:43 +0000 (0:00:00.479) 0:00:56.469 ****** 2026-01-17 00:45:45.452171 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_653651ff-f0c3-4f93-a415-b7bde2938506) 2026-01-17 00:45:45.452185 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_653651ff-f0c3-4f93-a415-b7bde2938506) 2026-01-17 00:45:45.452200 | orchestrator | 2026-01-17 00:45:45.452229 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:45:45.452281 | orchestrator | Saturday 17 January 2026 00:45:43 +0000 (0:00:00.418) 0:00:56.888 ****** 2026-01-17 00:45:45.452296 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3748448b-4cb4-41ff-a93c-c2a900d49ce0) 2026-01-17 00:45:45.452311 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3748448b-4cb4-41ff-a93c-c2a900d49ce0) 2026-01-17 00:45:45.452324 | orchestrator | 2026-01-17 00:45:45.452337 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:45:45.452352 | orchestrator | Saturday 17 
January 2026 00:45:44 +0000 (0:00:00.433) 0:00:57.321 ****** 2026-01-17 00:45:45.452366 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b2725b1a-ab02-479a-b1d7-829717bc50e1) 2026-01-17 00:45:45.452380 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b2725b1a-ab02-479a-b1d7-829717bc50e1) 2026-01-17 00:45:45.452394 | orchestrator | 2026-01-17 00:45:45.452402 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-17 00:45:45.452411 | orchestrator | Saturday 17 January 2026 00:45:44 +0000 (0:00:00.445) 0:00:57.767 ****** 2026-01-17 00:45:45.452419 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-17 00:45:45.452428 | orchestrator | 2026-01-17 00:45:45.452436 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:45:45.452445 | orchestrator | Saturday 17 January 2026 00:45:45 +0000 (0:00:00.366) 0:00:58.133 ****** 2026-01-17 00:45:45.452453 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-01-17 00:45:45.452462 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-01-17 00:45:45.452470 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-01-17 00:45:45.452479 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-01-17 00:45:45.452487 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-01-17 00:45:45.452495 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-01-17 00:45:45.452504 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-01-17 00:45:45.452513 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-01-17 00:45:45.452521 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-01-17 00:45:45.452530 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-01-17 00:45:45.452538 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-01-17 00:45:45.452560 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-01-17 00:45:54.789274 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-01-17 00:45:54.789374 | orchestrator | 2026-01-17 00:45:54.789383 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:45:54.789389 | orchestrator | Saturday 17 January 2026 00:45:45 +0000 (0:00:00.437) 0:00:58.571 ****** 2026-01-17 00:45:54.789394 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:54.789399 | orchestrator | 2026-01-17 00:45:54.789404 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:45:54.789409 | orchestrator | Saturday 17 January 2026 00:45:45 +0000 (0:00:00.213) 0:00:58.785 ****** 2026-01-17 00:45:54.789413 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:54.789418 | orchestrator | 2026-01-17 00:45:54.789422 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:45:54.789426 | orchestrator | Saturday 17 January 2026 00:45:46 +0000 (0:00:00.713) 0:00:59.498 ****** 2026-01-17 00:45:54.789446 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:54.789451 | orchestrator | 2026-01-17 00:45:54.789455 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:45:54.789460 | 
orchestrator | Saturday 17 January 2026 00:45:46 +0000 (0:00:00.203) 0:00:59.702 ****** 2026-01-17 00:45:54.789464 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:54.789468 | orchestrator | 2026-01-17 00:45:54.789473 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:45:54.789477 | orchestrator | Saturday 17 January 2026 00:45:46 +0000 (0:00:00.205) 0:00:59.908 ****** 2026-01-17 00:45:54.789481 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:54.789485 | orchestrator | 2026-01-17 00:45:54.789490 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:45:54.789494 | orchestrator | Saturday 17 January 2026 00:45:46 +0000 (0:00:00.220) 0:01:00.129 ****** 2026-01-17 00:45:54.789498 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:54.789502 | orchestrator | 2026-01-17 00:45:54.789507 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:45:54.789511 | orchestrator | Saturday 17 January 2026 00:45:47 +0000 (0:00:00.222) 0:01:00.352 ****** 2026-01-17 00:45:54.789515 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:54.789520 | orchestrator | 2026-01-17 00:45:54.789524 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:45:54.789528 | orchestrator | Saturday 17 January 2026 00:45:47 +0000 (0:00:00.194) 0:01:00.546 ****** 2026-01-17 00:45:54.789532 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:54.789537 | orchestrator | 2026-01-17 00:45:54.789541 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:45:54.789545 | orchestrator | Saturday 17 January 2026 00:45:47 +0000 (0:00:00.229) 0:01:00.775 ****** 2026-01-17 00:45:54.789558 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-01-17 00:45:54.789564 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-01-17 00:45:54.789568 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-17 00:45:54.789573 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-17 00:45:54.789577 | orchestrator | 2026-01-17 00:45:54.789581 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:45:54.789586 | orchestrator | Saturday 17 January 2026 00:45:48 +0000 (0:00:00.695) 0:01:01.471 ****** 2026-01-17 00:45:54.789590 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:54.789594 | orchestrator | 2026-01-17 00:45:54.789598 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:45:54.789603 | orchestrator | Saturday 17 January 2026 00:45:48 +0000 (0:00:00.185) 0:01:01.657 ****** 2026-01-17 00:45:54.789608 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:54.789612 | orchestrator | 2026-01-17 00:45:54.789616 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:45:54.789620 | orchestrator | Saturday 17 January 2026 00:45:48 +0000 (0:00:00.213) 0:01:01.871 ****** 2026-01-17 00:45:54.789625 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:54.789629 | orchestrator | 2026-01-17 00:45:54.789633 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-17 00:45:54.789637 | orchestrator | Saturday 17 January 2026 00:45:48 +0000 (0:00:00.208) 0:01:02.080 ****** 2026-01-17 00:45:54.789642 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:54.789646 | orchestrator | 2026-01-17 00:45:54.789650 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-17 00:45:54.789654 | orchestrator | Saturday 17 January 2026 00:45:49 +0000 (0:00:00.216) 0:01:02.297 ****** 2026-01-17 00:45:54.789659 | orchestrator | skipping: [testbed-node-5] 2026-01-17 
00:45:54.789663 | orchestrator | 2026-01-17 00:45:54.789667 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-17 00:45:54.789672 | orchestrator | Saturday 17 January 2026 00:45:49 +0000 (0:00:00.350) 0:01:02.648 ****** 2026-01-17 00:45:54.789676 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a3dfbdd8-de3c-56f7-9997-9a9b5f483001'}}) 2026-01-17 00:45:54.789686 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '68934a0c-2b18-58d2-8851-459d4d664360'}}) 2026-01-17 00:45:54.789691 | orchestrator | 2026-01-17 00:45:54.789695 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-17 00:45:54.789700 | orchestrator | Saturday 17 January 2026 00:45:49 +0000 (0:00:00.222) 0:01:02.870 ****** 2026-01-17 00:45:54.789705 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001', 'data_vg': 'ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001'}) 2026-01-17 00:45:54.789710 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-68934a0c-2b18-58d2-8851-459d4d664360', 'data_vg': 'ceph-68934a0c-2b18-58d2-8851-459d4d664360'}) 2026-01-17 00:45:54.789714 | orchestrator | 2026-01-17 00:45:54.789718 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-17 00:45:54.789733 | orchestrator | Saturday 17 January 2026 00:45:51 +0000 (0:00:01.863) 0:01:04.734 ****** 2026-01-17 00:45:54.789738 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001', 'data_vg': 'ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001'})  2026-01-17 00:45:54.789743 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-68934a0c-2b18-58d2-8851-459d4d664360', 'data_vg': 'ceph-68934a0c-2b18-58d2-8851-459d4d664360'})  2026-01-17 00:45:54.789747 | orchestrator | skipping: 
[testbed-node-5] 2026-01-17 00:45:54.789752 | orchestrator | 2026-01-17 00:45:54.789756 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-17 00:45:54.789761 | orchestrator | Saturday 17 January 2026 00:45:51 +0000 (0:00:00.162) 0:01:04.896 ****** 2026-01-17 00:45:54.789765 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001', 'data_vg': 'ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001'}) 2026-01-17 00:45:54.789770 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-68934a0c-2b18-58d2-8851-459d4d664360', 'data_vg': 'ceph-68934a0c-2b18-58d2-8851-459d4d664360'}) 2026-01-17 00:45:54.789774 | orchestrator | 2026-01-17 00:45:54.789778 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-17 00:45:54.789782 | orchestrator | Saturday 17 January 2026 00:45:53 +0000 (0:00:01.385) 0:01:06.282 ****** 2026-01-17 00:45:54.789787 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001', 'data_vg': 'ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001'})  2026-01-17 00:45:54.789791 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-68934a0c-2b18-58d2-8851-459d4d664360', 'data_vg': 'ceph-68934a0c-2b18-58d2-8851-459d4d664360'})  2026-01-17 00:45:54.789795 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:54.789800 | orchestrator | 2026-01-17 00:45:54.789804 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-17 00:45:54.789808 | orchestrator | Saturday 17 January 2026 00:45:53 +0000 (0:00:00.186) 0:01:06.469 ****** 2026-01-17 00:45:54.789812 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:54.789817 | orchestrator | 2026-01-17 00:45:54.789821 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-17 00:45:54.789825 | 
orchestrator | Saturday 17 January 2026 00:45:53 +0000 (0:00:00.141) 0:01:06.610 ****** 2026-01-17 00:45:54.789833 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001', 'data_vg': 'ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001'})  2026-01-17 00:45:54.789838 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-68934a0c-2b18-58d2-8851-459d4d664360', 'data_vg': 'ceph-68934a0c-2b18-58d2-8851-459d4d664360'})  2026-01-17 00:45:54.789843 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:54.789848 | orchestrator | 2026-01-17 00:45:54.789853 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-17 00:45:54.789862 | orchestrator | Saturday 17 January 2026 00:45:53 +0000 (0:00:00.175) 0:01:06.786 ****** 2026-01-17 00:45:54.789866 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:54.789871 | orchestrator | 2026-01-17 00:45:54.789876 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-17 00:45:54.789881 | orchestrator | Saturday 17 January 2026 00:45:53 +0000 (0:00:00.137) 0:01:06.923 ****** 2026-01-17 00:45:54.789886 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001', 'data_vg': 'ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001'})  2026-01-17 00:45:54.789891 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-68934a0c-2b18-58d2-8851-459d4d664360', 'data_vg': 'ceph-68934a0c-2b18-58d2-8851-459d4d664360'})  2026-01-17 00:45:54.789896 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:54.789901 | orchestrator | 2026-01-17 00:45:54.789906 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-17 00:45:54.789911 | orchestrator | Saturday 17 January 2026 00:45:53 +0000 (0:00:00.156) 0:01:07.079 ****** 2026-01-17 00:45:54.789916 | orchestrator | 
skipping: [testbed-node-5] 2026-01-17 00:45:54.789921 | orchestrator | 2026-01-17 00:45:54.789926 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-17 00:45:54.789930 | orchestrator | Saturday 17 January 2026 00:45:54 +0000 (0:00:00.150) 0:01:07.230 ****** 2026-01-17 00:45:54.789935 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001', 'data_vg': 'ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001'})  2026-01-17 00:45:54.789940 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-68934a0c-2b18-58d2-8851-459d4d664360', 'data_vg': 'ceph-68934a0c-2b18-58d2-8851-459d4d664360'})  2026-01-17 00:45:54.789945 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:45:54.789952 | orchestrator | 2026-01-17 00:45:54.789960 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-17 00:45:54.789967 | orchestrator | Saturday 17 January 2026 00:45:54 +0000 (0:00:00.140) 0:01:07.370 ****** 2026-01-17 00:45:54.789972 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:45:54.789976 | orchestrator | 2026-01-17 00:45:54.789981 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-17 00:45:54.789986 | orchestrator | Saturday 17 January 2026 00:45:54 +0000 (0:00:00.366) 0:01:07.737 ****** 2026-01-17 00:45:54.789995 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001', 'data_vg': 'ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001'})  2026-01-17 00:46:01.059232 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-68934a0c-2b18-58d2-8851-459d4d664360', 'data_vg': 'ceph-68934a0c-2b18-58d2-8851-459d4d664360'})  2026-01-17 00:46:01.059443 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:01.059466 | orchestrator | 2026-01-17 00:46:01.059480 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-01-17 00:46:01.059493 | orchestrator | Saturday 17 January 2026 00:45:54 +0000 (0:00:00.178) 0:01:07.916 ****** 2026-01-17 00:46:01.059505 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001', 'data_vg': 'ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001'})  2026-01-17 00:46:01.059517 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-68934a0c-2b18-58d2-8851-459d4d664360', 'data_vg': 'ceph-68934a0c-2b18-58d2-8851-459d4d664360'})  2026-01-17 00:46:01.059528 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:01.059538 | orchestrator | 2026-01-17 00:46:01.059550 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-17 00:46:01.059560 | orchestrator | Saturday 17 January 2026 00:45:54 +0000 (0:00:00.181) 0:01:08.097 ****** 2026-01-17 00:46:01.059571 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001', 'data_vg': 'ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001'})  2026-01-17 00:46:01.059582 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-68934a0c-2b18-58d2-8851-459d4d664360', 'data_vg': 'ceph-68934a0c-2b18-58d2-8851-459d4d664360'})  2026-01-17 00:46:01.059622 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:01.059634 | orchestrator | 2026-01-17 00:46:01.059645 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-17 00:46:01.059656 | orchestrator | Saturday 17 January 2026 00:45:55 +0000 (0:00:00.165) 0:01:08.262 ****** 2026-01-17 00:46:01.059667 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:01.059677 | orchestrator | 2026-01-17 00:46:01.059688 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-17 00:46:01.059699 | orchestrator | Saturday 17 January 2026 00:45:55 +0000 
(0:00:00.147) 0:01:08.409 ****** 2026-01-17 00:46:01.059710 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:01.059724 | orchestrator | 2026-01-17 00:46:01.059738 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-17 00:46:01.059751 | orchestrator | Saturday 17 January 2026 00:45:55 +0000 (0:00:00.130) 0:01:08.540 ****** 2026-01-17 00:46:01.059764 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:01.059777 | orchestrator | 2026-01-17 00:46:01.059791 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-17 00:46:01.059803 | orchestrator | Saturday 17 January 2026 00:45:55 +0000 (0:00:00.151) 0:01:08.692 ****** 2026-01-17 00:46:01.059816 | orchestrator | ok: [testbed-node-5] => { 2026-01-17 00:46:01.059830 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-17 00:46:01.059844 | orchestrator | } 2026-01-17 00:46:01.059857 | orchestrator | 2026-01-17 00:46:01.059871 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-17 00:46:01.059884 | orchestrator | Saturday 17 January 2026 00:45:55 +0000 (0:00:00.146) 0:01:08.838 ****** 2026-01-17 00:46:01.059897 | orchestrator | ok: [testbed-node-5] => { 2026-01-17 00:46:01.059910 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-17 00:46:01.059923 | orchestrator | } 2026-01-17 00:46:01.059936 | orchestrator | 2026-01-17 00:46:01.059949 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-17 00:46:01.059962 | orchestrator | Saturday 17 January 2026 00:45:55 +0000 (0:00:00.149) 0:01:08.988 ****** 2026-01-17 00:46:01.059975 | orchestrator | ok: [testbed-node-5] => { 2026-01-17 00:46:01.059988 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-17 00:46:01.060001 | orchestrator | } 2026-01-17 00:46:01.060013 | orchestrator | 2026-01-17 00:46:01.060026 | orchestrator | TASK 
[Gather DB VGs with total and available size in bytes] ******************** 2026-01-17 00:46:01.060042 | orchestrator | Saturday 17 January 2026 00:45:56 +0000 (0:00:00.163) 0:01:09.151 ****** 2026-01-17 00:46:01.060062 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:46:01.060080 | orchestrator | 2026-01-17 00:46:01.060098 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-17 00:46:01.060117 | orchestrator | Saturday 17 January 2026 00:45:56 +0000 (0:00:00.522) 0:01:09.674 ****** 2026-01-17 00:46:01.060136 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:46:01.060155 | orchestrator | 2026-01-17 00:46:01.060171 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-17 00:46:01.060188 | orchestrator | Saturday 17 January 2026 00:45:57 +0000 (0:00:00.526) 0:01:10.200 ****** 2026-01-17 00:46:01.060207 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:46:01.060226 | orchestrator | 2026-01-17 00:46:01.060243 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-17 00:46:01.060288 | orchestrator | Saturday 17 January 2026 00:45:57 +0000 (0:00:00.744) 0:01:10.944 ****** 2026-01-17 00:46:01.060306 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:46:01.060325 | orchestrator | 2026-01-17 00:46:01.060345 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-17 00:46:01.060365 | orchestrator | Saturday 17 January 2026 00:45:57 +0000 (0:00:00.152) 0:01:11.097 ****** 2026-01-17 00:46:01.060384 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:01.060404 | orchestrator | 2026-01-17 00:46:01.060422 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-17 00:46:01.060451 | orchestrator | Saturday 17 January 2026 00:45:58 +0000 (0:00:00.112) 0:01:11.209 ****** 2026-01-17 00:46:01.060462 | orchestrator | 
skipping: [testbed-node-5] 2026-01-17 00:46:01.060473 | orchestrator | 2026-01-17 00:46:01.060484 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-17 00:46:01.060516 | orchestrator | Saturday 17 January 2026 00:45:58 +0000 (0:00:00.135) 0:01:11.345 ****** 2026-01-17 00:46:01.060528 | orchestrator | ok: [testbed-node-5] => { 2026-01-17 00:46:01.060539 | orchestrator |  "vgs_report": { 2026-01-17 00:46:01.060550 | orchestrator |  "vg": [] 2026-01-17 00:46:01.060584 | orchestrator |  } 2026-01-17 00:46:01.060596 | orchestrator | } 2026-01-17 00:46:01.060607 | orchestrator | 2026-01-17 00:46:01.060618 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-17 00:46:01.060629 | orchestrator | Saturday 17 January 2026 00:45:58 +0000 (0:00:00.142) 0:01:11.487 ****** 2026-01-17 00:46:01.060639 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:01.060650 | orchestrator | 2026-01-17 00:46:01.060667 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-17 00:46:01.060685 | orchestrator | Saturday 17 January 2026 00:45:58 +0000 (0:00:00.146) 0:01:11.633 ****** 2026-01-17 00:46:01.060703 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:01.060721 | orchestrator | 2026-01-17 00:46:01.060739 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-17 00:46:01.060760 | orchestrator | Saturday 17 January 2026 00:45:58 +0000 (0:00:00.148) 0:01:11.782 ****** 2026-01-17 00:46:01.060776 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:01.060787 | orchestrator | 2026-01-17 00:46:01.060798 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-17 00:46:01.060809 | orchestrator | Saturday 17 January 2026 00:45:58 +0000 (0:00:00.129) 0:01:11.912 ****** 2026-01-17 00:46:01.060820 | orchestrator | 
skipping: [testbed-node-5] 2026-01-17 00:46:01.060831 | orchestrator | 2026-01-17 00:46:01.060842 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-17 00:46:01.060852 | orchestrator | Saturday 17 January 2026 00:45:58 +0000 (0:00:00.138) 0:01:12.051 ****** 2026-01-17 00:46:01.060863 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:01.060874 | orchestrator | 2026-01-17 00:46:01.060885 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-17 00:46:01.060896 | orchestrator | Saturday 17 January 2026 00:45:59 +0000 (0:00:00.141) 0:01:12.192 ****** 2026-01-17 00:46:01.060907 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:01.060918 | orchestrator | 2026-01-17 00:46:01.060928 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-17 00:46:01.060939 | orchestrator | Saturday 17 January 2026 00:45:59 +0000 (0:00:00.137) 0:01:12.330 ****** 2026-01-17 00:46:01.060950 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:01.060961 | orchestrator | 2026-01-17 00:46:01.060972 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-17 00:46:01.060983 | orchestrator | Saturday 17 January 2026 00:45:59 +0000 (0:00:00.144) 0:01:12.475 ****** 2026-01-17 00:46:01.060994 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:01.061004 | orchestrator | 2026-01-17 00:46:01.061015 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-17 00:46:01.061026 | orchestrator | Saturday 17 January 2026 00:45:59 +0000 (0:00:00.365) 0:01:12.840 ****** 2026-01-17 00:46:01.061037 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:01.061048 | orchestrator | 2026-01-17 00:46:01.061065 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-17 
00:46:01.061076 | orchestrator | Saturday 17 January 2026 00:45:59 +0000 (0:00:00.144) 0:01:12.985 ****** 2026-01-17 00:46:01.061087 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:01.061098 | orchestrator | 2026-01-17 00:46:01.061109 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-17 00:46:01.061129 | orchestrator | Saturday 17 January 2026 00:45:59 +0000 (0:00:00.142) 0:01:13.127 ****** 2026-01-17 00:46:01.061140 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:01.061151 | orchestrator | 2026-01-17 00:46:01.061162 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-17 00:46:01.061173 | orchestrator | Saturday 17 January 2026 00:46:00 +0000 (0:00:00.151) 0:01:13.279 ****** 2026-01-17 00:46:01.061184 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:01.061195 | orchestrator | 2026-01-17 00:46:01.061206 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-17 00:46:01.061217 | orchestrator | Saturday 17 January 2026 00:46:00 +0000 (0:00:00.142) 0:01:13.421 ****** 2026-01-17 00:46:01.061227 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:01.061238 | orchestrator | 2026-01-17 00:46:01.061249 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-17 00:46:01.061299 | orchestrator | Saturday 17 January 2026 00:46:00 +0000 (0:00:00.125) 0:01:13.547 ****** 2026-01-17 00:46:01.061319 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:01.061339 | orchestrator | 2026-01-17 00:46:01.061358 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-17 00:46:01.061373 | orchestrator | Saturday 17 January 2026 00:46:00 +0000 (0:00:00.138) 0:01:13.685 ****** 2026-01-17 00:46:01.061386 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001', 'data_vg': 'ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001'})  2026-01-17 00:46:01.061397 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-68934a0c-2b18-58d2-8851-459d4d664360', 'data_vg': 'ceph-68934a0c-2b18-58d2-8851-459d4d664360'})  2026-01-17 00:46:01.061408 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:01.061419 | orchestrator | 2026-01-17 00:46:01.061430 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-17 00:46:01.061441 | orchestrator | Saturday 17 January 2026 00:46:00 +0000 (0:00:00.150) 0:01:13.836 ****** 2026-01-17 00:46:01.061452 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001', 'data_vg': 'ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001'})  2026-01-17 00:46:01.061463 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-68934a0c-2b18-58d2-8851-459d4d664360', 'data_vg': 'ceph-68934a0c-2b18-58d2-8851-459d4d664360'})  2026-01-17 00:46:01.061474 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:01.061485 | orchestrator | 2026-01-17 00:46:01.061496 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-17 00:46:01.061507 | orchestrator | Saturday 17 January 2026 00:46:00 +0000 (0:00:00.172) 0:01:14.008 ****** 2026-01-17 00:46:01.061527 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001', 'data_vg': 'ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001'})  2026-01-17 00:46:04.170610 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-68934a0c-2b18-58d2-8851-459d4d664360', 'data_vg': 'ceph-68934a0c-2b18-58d2-8851-459d4d664360'})  2026-01-17 00:46:04.170733 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:04.170759 | orchestrator | 2026-01-17 00:46:04.170781 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-01-17 00:46:04.171583 | orchestrator | Saturday 17 January 2026 00:46:01 +0000 (0:00:00.177) 0:01:14.186 ****** 2026-01-17 00:46:04.171617 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001', 'data_vg': 'ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001'})  2026-01-17 00:46:04.171630 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-68934a0c-2b18-58d2-8851-459d4d664360', 'data_vg': 'ceph-68934a0c-2b18-58d2-8851-459d4d664360'})  2026-01-17 00:46:04.171640 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:04.171652 | orchestrator | 2026-01-17 00:46:04.171663 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-17 00:46:04.171705 | orchestrator | Saturday 17 January 2026 00:46:01 +0000 (0:00:00.164) 0:01:14.350 ****** 2026-01-17 00:46:04.171724 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001', 'data_vg': 'ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001'})  2026-01-17 00:46:04.171743 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-68934a0c-2b18-58d2-8851-459d4d664360', 'data_vg': 'ceph-68934a0c-2b18-58d2-8851-459d4d664360'})  2026-01-17 00:46:04.171763 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:04.171782 | orchestrator | 2026-01-17 00:46:04.171803 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-17 00:46:04.171824 | orchestrator | Saturday 17 January 2026 00:46:01 +0000 (0:00:00.147) 0:01:14.498 ****** 2026-01-17 00:46:04.171837 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001', 'data_vg': 'ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001'})  2026-01-17 00:46:04.171866 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-68934a0c-2b18-58d2-8851-459d4d664360', 'data_vg': 'ceph-68934a0c-2b18-58d2-8851-459d4d664360'})  2026-01-17 00:46:04.171886 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:04.171905 | orchestrator | 2026-01-17 00:46:04.171923 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-17 00:46:04.171942 | orchestrator | Saturday 17 January 2026 00:46:01 +0000 (0:00:00.380) 0:01:14.879 ****** 2026-01-17 00:46:04.171963 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001', 'data_vg': 'ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001'})  2026-01-17 00:46:04.171982 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-68934a0c-2b18-58d2-8851-459d4d664360', 'data_vg': 'ceph-68934a0c-2b18-58d2-8851-459d4d664360'})  2026-01-17 00:46:04.171997 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:04.172009 | orchestrator | 2026-01-17 00:46:04.172020 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-17 00:46:04.172031 | orchestrator | Saturday 17 January 2026 00:46:01 +0000 (0:00:00.164) 0:01:15.044 ****** 2026-01-17 00:46:04.172042 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001', 'data_vg': 'ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001'})  2026-01-17 00:46:04.172053 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-68934a0c-2b18-58d2-8851-459d4d664360', 'data_vg': 'ceph-68934a0c-2b18-58d2-8851-459d4d664360'})  2026-01-17 00:46:04.172063 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:04.172074 | orchestrator | 2026-01-17 00:46:04.172085 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-17 00:46:04.172096 | orchestrator | Saturday 17 January 2026 00:46:02 +0000 (0:00:00.195) 0:01:15.239 ****** 2026-01-17 00:46:04.172107 | 
orchestrator | ok: [testbed-node-5] 2026-01-17 00:46:04.172119 | orchestrator | 2026-01-17 00:46:04.172130 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-17 00:46:04.172141 | orchestrator | Saturday 17 January 2026 00:46:02 +0000 (0:00:00.533) 0:01:15.773 ****** 2026-01-17 00:46:04.172151 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:46:04.172163 | orchestrator | 2026-01-17 00:46:04.172174 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-17 00:46:04.172185 | orchestrator | Saturday 17 January 2026 00:46:03 +0000 (0:00:00.540) 0:01:16.313 ****** 2026-01-17 00:46:04.172195 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:46:04.172206 | orchestrator | 2026-01-17 00:46:04.172217 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-17 00:46:04.172229 | orchestrator | Saturday 17 January 2026 00:46:03 +0000 (0:00:00.144) 0:01:16.458 ****** 2026-01-17 00:46:04.172248 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-68934a0c-2b18-58d2-8851-459d4d664360', 'vg_name': 'ceph-68934a0c-2b18-58d2-8851-459d4d664360'}) 2026-01-17 00:46:04.172481 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001', 'vg_name': 'ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001'}) 2026-01-17 00:46:04.172519 | orchestrator | 2026-01-17 00:46:04.172531 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-17 00:46:04.172543 | orchestrator | Saturday 17 January 2026 00:46:03 +0000 (0:00:00.176) 0:01:16.634 ****** 2026-01-17 00:46:04.172576 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001', 'data_vg': 'ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001'})  2026-01-17 00:46:04.172588 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-68934a0c-2b18-58d2-8851-459d4d664360', 'data_vg': 'ceph-68934a0c-2b18-58d2-8851-459d4d664360'})  2026-01-17 00:46:04.172599 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:04.172610 | orchestrator | 2026-01-17 00:46:04.172621 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-17 00:46:04.172633 | orchestrator | Saturday 17 January 2026 00:46:03 +0000 (0:00:00.168) 0:01:16.802 ****** 2026-01-17 00:46:04.172644 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001', 'data_vg': 'ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001'})  2026-01-17 00:46:04.172655 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-68934a0c-2b18-58d2-8851-459d4d664360', 'data_vg': 'ceph-68934a0c-2b18-58d2-8851-459d4d664360'})  2026-01-17 00:46:04.172666 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:04.172677 | orchestrator | 2026-01-17 00:46:04.172688 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-17 00:46:04.172699 | orchestrator | Saturday 17 January 2026 00:46:03 +0000 (0:00:00.149) 0:01:16.952 ****** 2026-01-17 00:46:04.172709 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001', 'data_vg': 'ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001'})  2026-01-17 00:46:04.172720 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-68934a0c-2b18-58d2-8851-459d4d664360', 'data_vg': 'ceph-68934a0c-2b18-58d2-8851-459d4d664360'})  2026-01-17 00:46:04.172731 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:04.172742 | orchestrator | 2026-01-17 00:46:04.172753 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-17 00:46:04.172763 | orchestrator | Saturday 17 January 2026 00:46:03 +0000 (0:00:00.162) 0:01:17.115 ****** 2026-01-17 00:46:04.172774 | 
orchestrator | ok: [testbed-node-5] => { 2026-01-17 00:46:04.172785 | orchestrator |  "lvm_report": { 2026-01-17 00:46:04.172797 | orchestrator |  "lv": [ 2026-01-17 00:46:04.172808 | orchestrator |  { 2026-01-17 00:46:04.172827 | orchestrator |  "lv_name": "osd-block-68934a0c-2b18-58d2-8851-459d4d664360", 2026-01-17 00:46:04.172840 | orchestrator |  "vg_name": "ceph-68934a0c-2b18-58d2-8851-459d4d664360" 2026-01-17 00:46:04.172850 | orchestrator |  }, 2026-01-17 00:46:04.172861 | orchestrator |  { 2026-01-17 00:46:04.172872 | orchestrator |  "lv_name": "osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001", 2026-01-17 00:46:04.172883 | orchestrator |  "vg_name": "ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001" 2026-01-17 00:46:04.172894 | orchestrator |  } 2026-01-17 00:46:04.172905 | orchestrator |  ], 2026-01-17 00:46:04.172916 | orchestrator |  "pv": [ 2026-01-17 00:46:04.172926 | orchestrator |  { 2026-01-17 00:46:04.172937 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-17 00:46:04.172948 | orchestrator |  "vg_name": "ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001" 2026-01-17 00:46:04.172958 | orchestrator |  }, 2026-01-17 00:46:04.172969 | orchestrator |  { 2026-01-17 00:46:04.172980 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-17 00:46:04.172991 | orchestrator |  "vg_name": "ceph-68934a0c-2b18-58d2-8851-459d4d664360" 2026-01-17 00:46:04.173001 | orchestrator |  } 2026-01-17 00:46:04.173012 | orchestrator |  ] 2026-01-17 00:46:04.173029 | orchestrator |  } 2026-01-17 00:46:04.173041 | orchestrator | } 2026-01-17 00:46:04.173052 | orchestrator | 2026-01-17 00:46:04.173062 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:46:04.173073 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-17 00:46:04.173084 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-17 00:46:04.173095 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-17 00:46:04.173106 | orchestrator | 2026-01-17 00:46:04.173117 | orchestrator | 2026-01-17 00:46:04.173128 | orchestrator | 2026-01-17 00:46:04.173138 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 00:46:04.173149 | orchestrator | Saturday 17 January 2026 00:46:04 +0000 (0:00:00.161) 0:01:17.276 ****** 2026-01-17 00:46:04.173160 | orchestrator | =============================================================================== 2026-01-17 00:46:04.173171 | orchestrator | Create block VGs -------------------------------------------------------- 5.84s 2026-01-17 00:46:04.173181 | orchestrator | Create block LVs -------------------------------------------------------- 4.12s 2026-01-17 00:46:04.173192 | orchestrator | Add known partitions to the list of available block devices ------------- 1.95s 2026-01-17 00:46:04.173202 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.85s 2026-01-17 00:46:04.173214 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.76s 2026-01-17 00:46:04.173224 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.65s 2026-01-17 00:46:04.173235 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.61s 2026-01-17 00:46:04.173246 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.60s 2026-01-17 00:46:04.173317 | orchestrator | Add known links to the list of available block devices ------------------ 1.52s 2026-01-17 00:46:04.604185 | orchestrator | Add known partitions to the list of available block devices ------------- 1.15s 2026-01-17 00:46:04.604338 | orchestrator | Print LVM report data --------------------------------------------------- 0.99s 2026-01-17 00:46:04.604355 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.97s 2026-01-17 00:46:04.604367 | orchestrator | Add known links to the list of available block devices ------------------ 0.93s 2026-01-17 00:46:04.604378 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s 2026-01-17 00:46:04.604389 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.78s 2026-01-17 00:46:04.604400 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.78s 2026-01-17 00:46:04.604411 | orchestrator | Get initial list of available block devices ----------------------------- 0.75s 2026-01-17 00:46:04.604422 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.73s 2026-01-17 00:46:04.604434 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2026-01-17 00:46:04.604445 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.71s 2026-01-17 00:46:17.176882 | orchestrator | 2026-01-17 00:46:17 | INFO  | Task 7cbde353-7121-4c96-8279-05d5c681ec8d (facts) was prepared for execution. 2026-01-17 00:46:17.176982 | orchestrator | 2026-01-17 00:46:17 | INFO  | It takes a moment until task 7cbde353-7121-4c96-8279-05d5c681ec8d (facts) has been started and output is visible here. 
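The "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" and "Print LVM report data" tasks above merge the JSON reports of `lvs` and `pvs` into the single `lvm_report` structure shown for testbed-node-5. A minimal sketch of that merge in Python, assuming `lvs --reportformat json` / `pvs --reportformat json` input; the function name and the shortened sample names are illustrative, not the role's actual code:

```python
import json

def combine_lvm_report(lvs_json: str, pvs_json: str) -> dict:
    """Merge `lvs --reportformat json` and `pvs --reportformat json`
    output into one report, keeping only the name/VG associations."""
    lvs = json.loads(lvs_json)["report"][0]["lv"]
    pvs = json.loads(pvs_json)["report"][0]["pv"]
    return {
        "lv": [{"lv_name": lv["lv_name"], "vg_name": lv["vg_name"]} for lv in lvs],
        "pv": [{"pv_name": pv["pv_name"], "vg_name": pv["vg_name"]} for pv in pvs],
    }

# Sample input shaped like the report printed for testbed-node-5
# (names shortened for readability).
lvs_out = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-68934a0c", "vg_name": "ceph-68934a0c", "lv_size": "100g"},
]}]})
pvs_out = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb", "vg_name": "ceph-68934a0c", "pv_size": "100g"},
]}]})
report = combine_lvm_report(lvs_out, pvs_out)
```

The subsequent "Fail if ... LV defined in lvm_volumes is missing" tasks can then check each configured `data`/`data_vg` pair against the `lv` list of this report; in this run every pair was found, so all three checks were skipped.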
2026-01-17 00:46:29.863490 | orchestrator | 2026-01-17 00:46:29.863577 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-17 00:46:29.863587 | orchestrator | 2026-01-17 00:46:29.863592 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-17 00:46:29.863597 | orchestrator | Saturday 17 January 2026 00:46:21 +0000 (0:00:00.276) 0:00:00.276 ****** 2026-01-17 00:46:29.863620 | orchestrator | ok: [testbed-manager] 2026-01-17 00:46:29.863627 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:46:29.863632 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:46:29.863636 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:46:29.863641 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:46:29.863645 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:46:29.863650 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:46:29.863654 | orchestrator | 2026-01-17 00:46:29.863659 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-17 00:46:29.863665 | orchestrator | Saturday 17 January 2026 00:46:22 +0000 (0:00:01.093) 0:00:01.370 ****** 2026-01-17 00:46:29.863670 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:46:29.863675 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:46:29.863680 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:46:29.863684 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:46:29.863689 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:46:29.863693 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:46:29.863698 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:29.863702 | orchestrator | 2026-01-17 00:46:29.863707 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-17 00:46:29.863712 | orchestrator | 2026-01-17 00:46:29.863716 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-01-17 00:46:29.863721 | orchestrator | Saturday 17 January 2026 00:46:23 +0000 (0:00:01.184) 0:00:02.555 ****** 2026-01-17 00:46:29.863725 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:46:29.863730 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:46:29.863734 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:46:29.863739 | orchestrator | ok: [testbed-manager] 2026-01-17 00:46:29.863743 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:46:29.863748 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:46:29.863752 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:46:29.863757 | orchestrator | 2026-01-17 00:46:29.863761 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-17 00:46:29.863766 | orchestrator | 2026-01-17 00:46:29.863770 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-17 00:46:29.863775 | orchestrator | Saturday 17 January 2026 00:46:28 +0000 (0:00:05.008) 0:00:07.564 ****** 2026-01-17 00:46:29.863779 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:46:29.863784 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:46:29.863788 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:46:29.863793 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:46:29.863797 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:46:29.863802 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:46:29.863806 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:46:29.863811 | orchestrator | 2026-01-17 00:46:29.863815 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:46:29.863820 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:46:29.863826 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-01-17 00:46:29.863830 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:46:29.863835 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:46:29.863839 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:46:29.863844 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:46:29.863853 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:46:29.863858 | orchestrator | 2026-01-17 00:46:29.863862 | orchestrator | 2026-01-17 00:46:29.863867 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 00:46:29.863871 | orchestrator | Saturday 17 January 2026 00:46:29 +0000 (0:00:00.522) 0:00:08.086 ****** 2026-01-17 00:46:29.863876 | orchestrator | =============================================================================== 2026-01-17 00:46:29.863880 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.01s 2026-01-17 00:46:29.863885 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.18s 2026-01-17 00:46:29.863890 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.09s 2026-01-17 00:46:29.863894 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2026-01-17 00:46:42.352013 | orchestrator | 2026-01-17 00:46:42 | INFO  | Task 99051709-d98f-419e-a21c-fc39baebd586 (frr) was prepared for execution. 2026-01-17 00:46:42.352085 | orchestrator | 2026-01-17 00:46:42 | INFO  | It takes a moment until task 99051709-d98f-419e-a21c-fc39baebd586 (frr) has been started and output is visible here. 
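The "It takes a moment until task ... has been started" messages, and the later "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" lines, come from a client that polls each background task's state until it leaves STARTED. A minimal sketch of such a polling loop, assuming a `get_state(task_id)` callback; the callback and task ID are stand-ins, not the actual client API:

```python
import itertools
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=60.0):
    """Poll each task's state until none is still STARTED, or time out."""
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        # Keep only the tasks that are still running.
        pending = {t for t in pending if get_state(t) == "STARTED"}
        if pending:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"tasks still running: {sorted(pending)}")
            time.sleep(interval)  # "Wait 1 second(s) until the next check"
    return True

# Simulated task: reports STARTED twice, then SUCCESS.
states = itertools.chain(["STARTED", "STARTED", "SUCCESS"],
                         itertools.repeat("SUCCESS"))
done = wait_for_tasks(["0d19915a"], lambda t: next(states), interval=0.01)
```

In the run below, seven tasks of the nutshell collection are polled this way in parallel, which is why each check prints one state line per task before waiting again.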
2026-01-17 00:47:09.704147 | orchestrator | 2026-01-17 00:47:09.704226 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-01-17 00:47:09.704233 | orchestrator | 2026-01-17 00:47:09.704237 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-01-17 00:47:09.704256 | orchestrator | Saturday 17 January 2026 00:46:46 +0000 (0:00:00.231) 0:00:00.231 ****** 2026-01-17 00:47:09.704260 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-01-17 00:47:09.704266 | orchestrator | 2026-01-17 00:47:09.704270 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-01-17 00:47:09.704274 | orchestrator | Saturday 17 January 2026 00:46:46 +0000 (0:00:00.238) 0:00:00.470 ****** 2026-01-17 00:47:09.704280 | orchestrator | changed: [testbed-manager] 2026-01-17 00:47:09.704287 | orchestrator | 2026-01-17 00:47:09.704294 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-01-17 00:47:09.704305 | orchestrator | Saturday 17 January 2026 00:46:48 +0000 (0:00:01.232) 0:00:01.702 ****** 2026-01-17 00:47:09.704357 | orchestrator | changed: [testbed-manager] 2026-01-17 00:47:09.704366 | orchestrator | 2026-01-17 00:47:09.704372 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-01-17 00:47:09.704378 | orchestrator | Saturday 17 January 2026 00:46:59 +0000 (0:00:11.017) 0:00:12.720 ****** 2026-01-17 00:47:09.704384 | orchestrator | ok: [testbed-manager] 2026-01-17 00:47:09.704391 | orchestrator | 2026-01-17 00:47:09.704397 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-01-17 00:47:09.704403 | orchestrator | Saturday 17 January 2026 00:47:00 +0000 (0:00:01.141) 0:00:13.861 ****** 2026-01-17 
00:47:09.704409 | orchestrator | changed: [testbed-manager] 2026-01-17 00:47:09.704415 | orchestrator | 2026-01-17 00:47:09.704421 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-01-17 00:47:09.704428 | orchestrator | Saturday 17 January 2026 00:47:01 +0000 (0:00:00.982) 0:00:14.844 ****** 2026-01-17 00:47:09.704434 | orchestrator | ok: [testbed-manager] 2026-01-17 00:47:09.704440 | orchestrator | 2026-01-17 00:47:09.704446 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-01-17 00:47:09.704454 | orchestrator | Saturday 17 January 2026 00:47:02 +0000 (0:00:01.241) 0:00:16.085 ****** 2026-01-17 00:47:09.704460 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:47:09.704467 | orchestrator | 2026-01-17 00:47:09.704473 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-01-17 00:47:09.704480 | orchestrator | Saturday 17 January 2026 00:47:02 +0000 (0:00:00.138) 0:00:16.223 ****** 2026-01-17 00:47:09.704506 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:47:09.704513 | orchestrator | 2026-01-17 00:47:09.704519 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-01-17 00:47:09.704525 | orchestrator | Saturday 17 January 2026 00:47:02 +0000 (0:00:00.159) 0:00:16.383 ****** 2026-01-17 00:47:09.704531 | orchestrator | changed: [testbed-manager] 2026-01-17 00:47:09.704536 | orchestrator | 2026-01-17 00:47:09.704542 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-01-17 00:47:09.704548 | orchestrator | Saturday 17 January 2026 00:47:03 +0000 (0:00:00.991) 0:00:17.374 ****** 2026-01-17 00:47:09.704553 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-01-17 00:47:09.704560 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-01-17 00:47:09.704567 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-01-17 00:47:09.704573 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-01-17 00:47:09.704579 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-01-17 00:47:09.704585 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-01-17 00:47:09.704592 | orchestrator | 2026-01-17 00:47:09.704598 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-01-17 00:47:09.704605 | orchestrator | Saturday 17 January 2026 00:47:06 +0000 (0:00:02.318) 0:00:19.692 ****** 2026-01-17 00:47:09.704611 | orchestrator | ok: [testbed-manager] 2026-01-17 00:47:09.704617 | orchestrator | 2026-01-17 00:47:09.704623 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-01-17 00:47:09.704629 | orchestrator | Saturday 17 January 2026 00:47:07 +0000 (0:00:01.698) 0:00:21.391 ****** 2026-01-17 00:47:09.704634 | orchestrator | changed: [testbed-manager] 2026-01-17 00:47:09.704640 | orchestrator | 2026-01-17 00:47:09.704646 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:47:09.704653 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-17 00:47:09.704660 | orchestrator | 2026-01-17 00:47:09.704666 | orchestrator | 2026-01-17 00:47:09.704672 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 00:47:09.704678 | orchestrator | Saturday 17 January 2026 00:47:09 +0000 (0:00:01.469) 0:00:22.860 ****** 2026-01-17 00:47:09.704685 | 
orchestrator | =============================================================================== 2026-01-17 00:47:09.704691 | orchestrator | osism.services.frr : Install frr package ------------------------------- 11.02s 2026-01-17 00:47:09.704697 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.32s 2026-01-17 00:47:09.704703 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.70s 2026-01-17 00:47:09.704710 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.47s 2026-01-17 00:47:09.704717 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.24s 2026-01-17 00:47:09.704740 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.23s 2026-01-17 00:47:09.704747 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.14s 2026-01-17 00:47:09.704753 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.99s 2026-01-17 00:47:09.704760 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.98s 2026-01-17 00:47:09.704767 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.24s 2026-01-17 00:47:09.704774 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.16s 2026-01-17 00:47:09.704780 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2026-01-17 00:47:10.072581 | orchestrator | 2026-01-17 00:47:10.074459 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Jan 17 00:47:10 UTC 2026 2026-01-17 00:47:10.074530 | orchestrator | 2026-01-17 00:47:12.090008 | orchestrator | 2026-01-17 00:47:12 | INFO  | Collection nutshell is prepared for execution 2026-01-17 00:47:12.090121 | orchestrator | 2026-01-17 00:47:12 | INFO  | A [0] - 
dotfiles 2026-01-17 00:47:22.107860 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [0] - homer 2026-01-17 00:47:22.107959 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [0] - netdata 2026-01-17 00:47:22.107975 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [0] - openstackclient 2026-01-17 00:47:22.107988 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [0] - phpmyadmin 2026-01-17 00:47:22.108002 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [0] - common 2026-01-17 00:47:22.112225 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [1] -- loadbalancer 2026-01-17 00:47:22.112276 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [2] --- opensearch 2026-01-17 00:47:22.112284 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [2] --- mariadb-ng 2026-01-17 00:47:22.112291 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [3] ---- horizon 2026-01-17 00:47:22.112298 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [3] ---- keystone 2026-01-17 00:47:22.112305 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [4] ----- neutron 2026-01-17 00:47:22.112472 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [5] ------ wait-for-nova 2026-01-17 00:47:22.113024 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [6] ------- octavia 2026-01-17 00:47:22.114915 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [4] ----- barbican 2026-01-17 00:47:22.114969 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [4] ----- designate 2026-01-17 00:47:22.115192 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [4] ----- ironic 2026-01-17 00:47:22.115257 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [4] ----- placement 2026-01-17 00:47:22.115516 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [4] ----- magnum 2026-01-17 00:47:22.116578 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [1] -- openvswitch 2026-01-17 00:47:22.116621 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [2] --- ovn 2026-01-17 00:47:22.117093 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [1] -- memcached 2026-01-17 
00:47:22.117364 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [1] -- redis 2026-01-17 00:47:22.117565 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [1] -- rabbitmq-ng 2026-01-17 00:47:22.118099 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [0] - kubernetes 2026-01-17 00:47:22.120666 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [1] -- kubeconfig 2026-01-17 00:47:22.120932 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [1] -- copy-kubeconfig 2026-01-17 00:47:22.121011 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [0] - ceph 2026-01-17 00:47:22.123178 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [1] -- ceph-pools 2026-01-17 00:47:22.123233 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [2] --- copy-ceph-keys 2026-01-17 00:47:22.123351 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [3] ---- cephclient 2026-01-17 00:47:22.123531 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-01-17 00:47:22.123539 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [4] ----- wait-for-keystone 2026-01-17 00:47:22.123752 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [5] ------ kolla-ceph-rgw 2026-01-17 00:47:22.123891 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [5] ------ glance 2026-01-17 00:47:22.124031 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [5] ------ cinder 2026-01-17 00:47:22.124110 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [5] ------ nova 2026-01-17 00:47:22.124586 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [4] ----- prometheus 2026-01-17 00:47:22.124695 | orchestrator | 2026-01-17 00:47:22 | INFO  | A [5] ------ grafana 2026-01-17 00:47:22.323042 | orchestrator | 2026-01-17 00:47:22 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-01-17 00:47:22.323109 | orchestrator | 2026-01-17 00:47:22 | INFO  | Tasks are running in the background 2026-01-17 00:47:25.284806 | orchestrator | 2026-01-17 00:47:25 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-01-17 00:47:27.423963 | orchestrator | 2026-01-17 00:47:27 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:47:27.424175 | orchestrator | 2026-01-17 00:47:27 | INFO  | Task e2c65c51-08f5-44f0-aa44-61c900f04df3 is in state STARTED 2026-01-17 00:47:27.424915 | orchestrator | 2026-01-17 00:47:27 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED 2026-01-17 00:47:27.425453 | orchestrator | 2026-01-17 00:47:27 | INFO  | Task b96d2ddd-5096-4023-b992-82ba61be8885 is in state STARTED 2026-01-17 00:47:27.426126 | orchestrator | 2026-01-17 00:47:27 | INFO  | Task b26b3afb-130e-4d79-a89f-14875e617b6e is in state STARTED 2026-01-17 00:47:27.431817 | orchestrator | 2026-01-17 00:47:27 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:47:27.432423 | orchestrator | 2026-01-17 00:47:27 | INFO  | Task 0d19915a-c713-4e80-a209-9c2366f0786e is in state STARTED 2026-01-17 00:47:27.432471 | orchestrator | 2026-01-17 00:47:27 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:47:30.553271 | orchestrator | 2026-01-17 00:47:30 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:47:30.553516 | orchestrator | 2026-01-17 00:47:30 | INFO  | Task e2c65c51-08f5-44f0-aa44-61c900f04df3 is in state STARTED 2026-01-17 00:47:30.553536 | orchestrator | 2026-01-17 00:47:30 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED 2026-01-17 00:47:30.553551 | orchestrator | 2026-01-17 00:47:30 | INFO  | Task b96d2ddd-5096-4023-b992-82ba61be8885 is in state STARTED 2026-01-17 00:47:30.553566 | orchestrator | 2026-01-17 00:47:30 | INFO  | Task b26b3afb-130e-4d79-a89f-14875e617b6e is in state STARTED 2026-01-17 00:47:30.553580 | orchestrator | 2026-01-17 00:47:30 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:47:30.553607 | orchestrator | 2026-01-17 00:47:30 | INFO  | Task 
0d19915a-c713-4e80-a209-9c2366f0786e is in state STARTED 2026-01-17 00:47:30.553623 | orchestrator | 2026-01-17 00:47:30 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:47:43.341629 | orchestrator | 2026-01-17 00:47:43 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:47:43.341718 | orchestrator | 2026-01-17 00:47:43 | INFO  | Task e2c65c51-08f5-44f0-aa44-61c900f04df3 is in state STARTED 2026-01-17 00:47:43.341731 | orchestrator | 2026-01-17 00:47:43 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED 2026-01-17 00:47:43.341740 | orchestrator | 2026-01-17 00:47:43 | INFO  | Task b96d2ddd-5096-4023-b992-82ba61be8885 is in state STARTED 2026-01-17 00:47:43.341748 | orchestrator | 2026-01-17 00:47:43 | INFO  | Task 
b26b3afb-130e-4d79-a89f-14875e617b6e is in state STARTED 2026-01-17 00:47:43.341756 | orchestrator | 2026-01-17 00:47:43 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:47:43.341764 | orchestrator | 2026-01-17 00:47:43 | INFO  | Task 0d19915a-c713-4e80-a209-9c2366f0786e is in state STARTED 2026-01-17 00:47:43.341773 | orchestrator | 2026-01-17 00:47:43 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:47:46.304519 | orchestrator | 2026-01-17 00:47:46.304643 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-01-17 00:47:46.304666 | orchestrator | 2026-01-17 00:47:46.304680 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2026-01-17 00:47:46.304696 | orchestrator | Saturday 17 January 2026 00:47:33 +0000 (0:00:00.281) 0:00:00.281 ****** 2026-01-17 00:47:46.304741 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:47:46.304758 | orchestrator | changed: [testbed-manager] 2026-01-17 00:47:46.304772 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:47:46.304787 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:47:46.304802 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:47:46.304817 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:47:46.304832 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:47:46.304847 | orchestrator | 2026-01-17 00:47:46.304861 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2026-01-17 00:47:46.304875 | orchestrator | Saturday 17 January 2026 00:47:37 +0000 (0:00:04.127) 0:00:04.409 ****** 2026-01-17 00:47:46.304890 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-01-17 00:47:46.304905 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-01-17 00:47:46.304919 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-01-17 00:47:46.304934 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-01-17 00:47:46.304949 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-01-17 00:47:46.304964 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-01-17 00:47:46.304978 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-01-17 00:47:46.304994 | orchestrator | 2026-01-17 00:47:46.305010 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2026-01-17 00:47:46.305027 | orchestrator | Saturday 17 January 2026 00:47:39 +0000 (0:00:02.106) 0:00:06.519 ****** 2026-01-17 00:47:46.305049 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-17 00:47:37.952658', 'end': '2026-01-17 00:47:37.956335', 'delta': '0:00:00.003677', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-17 00:47:46.305081 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': 
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-17 00:47:38.243966', 'end': '2026-01-17 00:47:38.250371', 'delta': '0:00:00.006405', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-17 00:47:46.305099 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-17 00:47:38.142256', 'end': '2026-01-17 00:47:38.148775', 'delta': '0:00:00.006519', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-17 00:47:46.305169 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-17 00:47:38.266615', 'end': '2026-01-17 00:47:38.274261', 'delta': '0:00:00.007646', 'failed': False, 'msg': 'non-zero return 
code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-17 00:47:46.305191 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-17 00:47:38.433080', 'end': '2026-01-17 00:47:38.439877', 'delta': '0:00:00.006797', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-17 00:47:46.305207 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-17 00:47:38.528802', 'end': '2026-01-17 00:47:38.534768', 'delta': '0:00:00.005966', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-17 00:47:46.305620 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-17 00:47:38.605667', 'end': '2026-01-17 00:47:38.611630', 'delta': '0:00:00.005963', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-17 00:47:46.305661 | orchestrator | 2026-01-17 00:47:46.305677 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2026-01-17 00:47:46.305692 | orchestrator | Saturday 17 January 2026 00:47:41 +0000 (0:00:02.088) 0:00:08.608 ****** 2026-01-17 00:47:46.305706 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-01-17 00:47:46.305721 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-01-17 00:47:46.305734 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-01-17 00:47:46.305781 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-01-17 00:47:46.305795 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-01-17 00:47:46.305831 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-01-17 00:47:46.305847 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-01-17 00:47:46.305862 | orchestrator | 2026-01-17 00:47:46.305876 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2026-01-17 00:47:46.305890 | orchestrator | Saturday 17 January 2026 00:47:42 +0000 (0:00:01.170) 0:00:09.779 ****** 2026-01-17 00:47:46.305905 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-01-17 00:47:46.305921 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-01-17 00:47:46.305935 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-01-17 00:47:46.305950 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-01-17 00:47:46.305965 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-01-17 00:47:46.305981 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-01-17 00:47:46.305997 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-01-17 00:47:46.306012 | orchestrator | 2026-01-17 00:47:46.306088 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:47:46.306123 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:47:46.306142 | orchestrator | 
testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:47:46.306167 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:47:46.306183 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:47:46.306193 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:47:46.306202 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:47:46.306211 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:47:46.306219 | orchestrator | 2026-01-17 00:47:46.306230 | orchestrator | 2026-01-17 00:47:46.306245 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 00:47:46.306254 | orchestrator | Saturday 17 January 2026 00:47:45 +0000 (0:00:02.728) 0:00:12.508 ****** 2026-01-17 00:47:46.306267 | orchestrator | =============================================================================== 2026-01-17 00:47:46.306282 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.13s 2026-01-17 00:47:46.306297 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.73s 2026-01-17 00:47:46.306311 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.11s 2026-01-17 00:47:46.306327 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.09s 2026-01-17 00:47:46.306366 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. 
---- 1.17s 2026-01-17 00:47:46.306381 | orchestrator | 2026-01-17 00:47:46 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:47:46.306396 | orchestrator | 2026-01-17 00:47:46 | INFO  | Task e2c65c51-08f5-44f0-aa44-61c900f04df3 is in state STARTED 2026-01-17 00:47:46.306411 | orchestrator | 2026-01-17 00:47:46 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED 2026-01-17 00:47:46.306426 | orchestrator | 2026-01-17 00:47:46 | INFO  | Task b96d2ddd-5096-4023-b992-82ba61be8885 is in state SUCCESS 2026-01-17 00:47:46.306442 | orchestrator | 2026-01-17 00:47:46 | INFO  | Task b26b3afb-130e-4d79-a89f-14875e617b6e is in state STARTED 2026-01-17 00:47:46.336987 | orchestrator | 2026-01-17 00:47:46 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:47:46.430489 | orchestrator | 2026-01-17 00:47:46 | INFO  | Task 0d19915a-c713-4e80-a209-9c2366f0786e is in state STARTED 2026-01-17 00:47:46.430570 | orchestrator | 2026-01-17 00:47:46 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:47:49.746302 | orchestrator | 2026-01-17 00:47:49 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:47:49.746430 | orchestrator | 2026-01-17 00:47:49 | INFO  | Task e2c65c51-08f5-44f0-aa44-61c900f04df3 is in state STARTED 2026-01-17 00:47:49.746440 | orchestrator | 2026-01-17 00:47:49 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED 2026-01-17 00:47:49.746447 | orchestrator | 2026-01-17 00:47:49 | INFO  | Task b26b3afb-130e-4d79-a89f-14875e617b6e is in state STARTED 2026-01-17 00:47:49.746455 | orchestrator | 2026-01-17 00:47:49 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:47:49.746462 | orchestrator | 2026-01-17 00:47:49 | INFO  | Task 0d19915a-c713-4e80-a209-9c2366f0786e is in state STARTED 2026-01-17 00:47:49.746469 | orchestrator | 2026-01-17 00:47:49 | INFO  | Task 
09185a01-5d52-4415-9a1a-56df5394b85d is in state STARTED 2026-01-17 00:47:49.746476 | orchestrator | 2026-01-17 00:47:49 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:48:14.369049 | orchestrator | 2026-01-17 00:48:14 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:48:14.369114 | orchestrator | 2026-01-17 00:48:14 | INFO  | Task e2c65c51-08f5-44f0-aa44-61c900f04df3 is in state STARTED 2026-01-17 00:48:14.369123 | orchestrator | 2026-01-17 00:48:14 | INFO  | Task 
c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED 2026-01-17 00:48:14.369131 | orchestrator | 2026-01-17 00:48:14 | INFO  | Task b26b3afb-130e-4d79-a89f-14875e617b6e is in state STARTED 2026-01-17 00:48:14.369138 | orchestrator | 2026-01-17 00:48:14 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:48:14.369145 | orchestrator | 2026-01-17 00:48:14 | INFO  | Task 0d19915a-c713-4e80-a209-9c2366f0786e is in state STARTED 2026-01-17 00:48:14.369152 | orchestrator | 2026-01-17 00:48:14 | INFO  | Task 09185a01-5d52-4415-9a1a-56df5394b85d is in state STARTED 2026-01-17 00:48:14.369159 | orchestrator | 2026-01-17 00:48:14 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:48:17.274894 | orchestrator | 2026-01-17 00:48:17 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:48:17.274992 | orchestrator | 2026-01-17 00:48:17 | INFO  | Task e2c65c51-08f5-44f0-aa44-61c900f04df3 is in state STARTED 2026-01-17 00:48:17.275476 | orchestrator | 2026-01-17 00:48:17 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED 2026-01-17 00:48:17.276357 | orchestrator | 2026-01-17 00:48:17 | INFO  | Task b26b3afb-130e-4d79-a89f-14875e617b6e is in state STARTED 2026-01-17 00:48:17.277096 | orchestrator | 2026-01-17 00:48:17 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:48:17.280257 | orchestrator | 2026-01-17 00:48:17 | INFO  | Task 0d19915a-c713-4e80-a209-9c2366f0786e is in state SUCCESS 2026-01-17 00:48:17.280286 | orchestrator | 2026-01-17 00:48:17 | INFO  | Task 09185a01-5d52-4415-9a1a-56df5394b85d is in state STARTED 2026-01-17 00:48:17.280292 | orchestrator | 2026-01-17 00:48:17 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:48:20.346236 | orchestrator | 2026-01-17 00:48:20 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:48:20.347793 | orchestrator | 2026-01-17 00:48:20 | INFO  | Task 
e2c65c51-08f5-44f0-aa44-61c900f04df3 is in state STARTED 2026-01-17 00:48:20.360090 | orchestrator | 2026-01-17 00:48:20 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED 2026-01-17 00:48:20.360253 | orchestrator | 2026-01-17 00:48:20 | INFO  | Task b26b3afb-130e-4d79-a89f-14875e617b6e is in state STARTED 2026-01-17 00:48:20.360266 | orchestrator | 2026-01-17 00:48:20 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:48:20.360291 | orchestrator | 2026-01-17 00:48:20 | INFO  | Task 09185a01-5d52-4415-9a1a-56df5394b85d is in state STARTED 2026-01-17 00:48:20.360300 | orchestrator | 2026-01-17 00:48:20 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:48:23.392987 | orchestrator | 2026-01-17 00:48:23 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:48:23.394057 | orchestrator | 2026-01-17 00:48:23 | INFO  | Task e2c65c51-08f5-44f0-aa44-61c900f04df3 is in state SUCCESS 2026-01-17 00:48:23.397090 | orchestrator | 2026-01-17 00:48:23 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED 2026-01-17 00:48:23.400650 | orchestrator | 2026-01-17 00:48:23 | INFO  | Task b26b3afb-130e-4d79-a89f-14875e617b6e is in state STARTED 2026-01-17 00:48:23.401809 | orchestrator | 2026-01-17 00:48:23 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:48:23.403653 | orchestrator | 2026-01-17 00:48:23 | INFO  | Task 09185a01-5d52-4415-9a1a-56df5394b85d is in state STARTED 2026-01-17 00:48:23.403714 | orchestrator | 2026-01-17 00:48:23 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:48:26.468169 | orchestrator | 2026-01-17 00:48:26 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:48:26.469768 | orchestrator | 2026-01-17 00:48:26 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED 2026-01-17 00:48:26.473192 | orchestrator | 2026-01-17 00:48:26 | INFO  | Task 
b26b3afb-130e-4d79-a89f-14875e617b6e is in state STARTED 2026-01-17 00:48:26.475017 | orchestrator | 2026-01-17 00:48:26 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:48:26.477117 | orchestrator | 2026-01-17 00:48:26 | INFO  | Task 09185a01-5d52-4415-9a1a-56df5394b85d is in state STARTED 2026-01-17 00:48:26.477170 | orchestrator | 2026-01-17 00:48:26 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:48:57.138936 | orchestrator | 2026-01-17 00:48:57 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:48:57.140687 | orchestrator | 2026-01-17 00:48:57 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED 2026-01-17 00:48:57.140746 | orchestrator | 2026-01-17 00:48:57 | INFO  | Task 
b26b3afb-130e-4d79-a89f-14875e617b6e is in state STARTED 2026-01-17 00:48:57.143132 | orchestrator | 2026-01-17 00:48:57 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:48:57.144770 | orchestrator | 2026-01-17 00:48:57 | INFO  | Task 09185a01-5d52-4415-9a1a-56df5394b85d is in state STARTED 2026-01-17 00:48:57.144819 | orchestrator | 2026-01-17 00:48:57 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:49:00.185066 | orchestrator | 2026-01-17 00:49:00 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:49:00.187132 | orchestrator | 2026-01-17 00:49:00 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED 2026-01-17 00:49:00.188076 | orchestrator | 2026-01-17 00:49:00 | INFO  | Task b26b3afb-130e-4d79-a89f-14875e617b6e is in state SUCCESS 2026-01-17 00:49:00.188479 | orchestrator | 2026-01-17 00:49:00.188508 | orchestrator | 2026-01-17 00:49:00.188516 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-01-17 00:49:00.188525 | orchestrator | 2026-01-17 00:49:00.188535 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-01-17 00:49:00.188557 | orchestrator | Saturday 17 January 2026 00:47:35 +0000 (0:00:00.444) 0:00:00.444 ****** 2026-01-17 00:49:00.188594 | orchestrator | ok: [testbed-manager] => { 2026-01-17 00:49:00.188603 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2026-01-17 00:49:00.188610 | orchestrator | }
2026-01-17 00:49:00.188617 | orchestrator |
2026-01-17 00:49:00.188625 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-01-17 00:49:00.188630 | orchestrator | Saturday 17 January 2026 00:47:35 +0000 (0:00:00.249) 0:00:00.694 ******
2026-01-17 00:49:00.188636 | orchestrator | ok: [testbed-manager]
2026-01-17 00:49:00.188643 | orchestrator |
2026-01-17 00:49:00.188649 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-01-17 00:49:00.188655 | orchestrator | Saturday 17 January 2026 00:47:37 +0000 (0:00:01.589) 0:00:02.283 ******
2026-01-17 00:49:00.188661 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-01-17 00:49:00.188667 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-01-17 00:49:00.188673 | orchestrator |
2026-01-17 00:49:00.188679 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-01-17 00:49:00.188685 | orchestrator | Saturday 17 January 2026 00:47:38 +0000 (0:00:01.554) 0:00:03.838 ******
2026-01-17 00:49:00.188691 | orchestrator | changed: [testbed-manager]
2026-01-17 00:49:00.188697 | orchestrator |
2026-01-17 00:49:00.188703 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-01-17 00:49:00.188709 | orchestrator | Saturday 17 January 2026 00:47:41 +0000 (0:00:02.822) 0:00:06.660 ******
2026-01-17 00:49:00.188716 | orchestrator | changed: [testbed-manager]
2026-01-17 00:49:00.188721 | orchestrator |
2026-01-17 00:49:00.188725 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-01-17 00:49:00.188787 | orchestrator | Saturday 17 January 2026 00:47:43 +0000 (0:00:01.549) 0:00:08.209 ******
2026-01-17 00:49:00.188793 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-01-17 00:49:00.188797 | orchestrator | ok: [testbed-manager]
2026-01-17 00:49:00.188800 | orchestrator |
2026-01-17 00:49:00.188804 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-01-17 00:49:00.189347 | orchestrator | Saturday 17 January 2026 00:48:12 +0000 (0:00:29.181) 0:00:37.391 ******
2026-01-17 00:49:00.189369 | orchestrator | changed: [testbed-manager]
2026-01-17 00:49:00.189376 | orchestrator |
2026-01-17 00:49:00.189383 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 00:49:00.189420 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 00:49:00.189426 | orchestrator |
2026-01-17 00:49:00.189430 | orchestrator |
2026-01-17 00:49:00.189434 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 00:49:00.189439 | orchestrator | Saturday 17 January 2026 00:48:16 +0000 (0:00:04.569) 0:00:41.960 ******
2026-01-17 00:49:00.189443 | orchestrator | ===============================================================================
2026-01-17 00:49:00.189447 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 29.18s
2026-01-17 00:49:00.189451 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 4.57s
2026-01-17 00:49:00.189455 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.82s
2026-01-17 00:49:00.189459 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.59s
2026-01-17 00:49:00.189463 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.55s
2026-01-17 00:49:00.189467 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.55s
2026-01-17 00:49:00.189471 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.25s
2026-01-17 00:49:00.189475 | orchestrator |
2026-01-17 00:49:00.189479 | orchestrator |
2026-01-17 00:49:00.189483 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-01-17 00:49:00.189496 | orchestrator |
2026-01-17 00:49:00.189500 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-01-17 00:49:00.189504 | orchestrator | Saturday 17 January 2026 00:47:34 +0000 (0:00:00.719) 0:00:00.719 ******
2026-01-17 00:49:00.189509 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-01-17 00:49:00.189514 | orchestrator |
2026-01-17 00:49:00.189523 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-01-17 00:49:00.189527 | orchestrator | Saturday 17 January 2026 00:47:35 +0000 (0:00:00.472) 0:00:01.191 ******
2026-01-17 00:49:00.189531 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-01-17 00:49:00.189535 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-01-17 00:49:00.189539 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-01-17 00:49:00.189543 | orchestrator |
2026-01-17 00:49:00.189547 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-01-17 00:49:00.189551 | orchestrator | Saturday 17 January 2026 00:47:36 +0000 (0:00:01.698) 0:00:02.890 ******
2026-01-17 00:49:00.189605 | orchestrator | changed: [testbed-manager]
2026-01-17 00:49:00.189612 | orchestrator |
2026-01-17 00:49:00.189618 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-01-17 00:49:00.189625 | orchestrator | Saturday 17 January 2026 00:47:39 +0000 (0:00:02.282) 0:00:05.173 ******
2026-01-17 00:49:00.189648 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-01-17 00:49:00.189655 | orchestrator | ok: [testbed-manager]
2026-01-17 00:49:00.189662 | orchestrator |
2026-01-17 00:49:00.189668 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-01-17 00:49:00.189673 | orchestrator | Saturday 17 January 2026 00:48:15 +0000 (0:00:36.531) 0:00:41.704 ******
2026-01-17 00:49:00.189679 | orchestrator | changed: [testbed-manager]
2026-01-17 00:49:00.189684 | orchestrator |
2026-01-17 00:49:00.189690 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-01-17 00:49:00.189695 | orchestrator | Saturday 17 January 2026 00:48:17 +0000 (0:00:01.958) 0:00:43.663 ******
2026-01-17 00:49:00.189701 | orchestrator | ok: [testbed-manager]
2026-01-17 00:49:00.189707 | orchestrator |
2026-01-17 00:49:00.189712 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-01-17 00:49:00.189718 | orchestrator | Saturday 17 January 2026 00:48:18 +0000 (0:00:00.709) 0:00:44.372 ******
2026-01-17 00:49:00.189723 | orchestrator | changed: [testbed-manager]
2026-01-17 00:49:00.189729 | orchestrator |
2026-01-17 00:49:00.189735 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-01-17 00:49:00.189741 | orchestrator | Saturday 17 January 2026 00:48:19 +0000 (0:00:01.532) 0:00:45.905 ******
2026-01-17 00:49:00.189748 | orchestrator | changed: [testbed-manager]
2026-01-17 00:49:00.189754 | orchestrator |
2026-01-17 00:49:00.189759 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-01-17 00:49:00.189762 | orchestrator | Saturday 17 January 2026 00:48:21 +0000 (0:00:01.081) 0:00:46.987 ******
2026-01-17 00:49:00.189766 | orchestrator | changed: [testbed-manager]
2026-01-17 00:49:00.189770 | orchestrator |
2026-01-17 00:49:00.189774 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-01-17 00:49:00.189777 | orchestrator | Saturday 17 January 2026 00:48:21 +0000 (0:00:00.580) 0:00:47.567 ******
2026-01-17 00:49:00.189781 | orchestrator | ok: [testbed-manager]
2026-01-17 00:49:00.189785 | orchestrator |
2026-01-17 00:49:00.189789 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 00:49:00.189792 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 00:49:00.189796 | orchestrator |
2026-01-17 00:49:00.189800 | orchestrator |
2026-01-17 00:49:00.189810 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 00:49:00.189813 | orchestrator | Saturday 17 January 2026 00:48:21 +0000 (0:00:00.352) 0:00:47.919 ******
2026-01-17 00:49:00.189817 | orchestrator | ===============================================================================
2026-01-17 00:49:00.189821 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.53s
2026-01-17 00:49:00.189824 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.28s
2026-01-17 00:49:00.189828 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.96s
2026-01-17 00:49:00.189832 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.70s
2026-01-17 00:49:00.189836 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.53s
2026-01-17 00:49:00.189839 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.08s
2026-01-17 00:49:00.189843 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.71s
2026-01-17 00:49:00.189847 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.58s
2026-01-17 00:49:00.189850 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.47s
2026-01-17 00:49:00.189854 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.35s
2026-01-17 00:49:00.189858 | orchestrator |
2026-01-17 00:49:00.189861 | orchestrator |
2026-01-17 00:49:00.189865 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-17 00:49:00.189869 | orchestrator |
2026-01-17 00:49:00.189872 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-17 00:49:00.189876 | orchestrator | Saturday 17 January 2026 00:47:35 +0000 (0:00:00.343) 0:00:00.343 ******
2026-01-17 00:49:00.189880 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-01-17 00:49:00.189883 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-01-17 00:49:00.189887 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-01-17 00:49:00.189891 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-01-17 00:49:00.189894 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-01-17 00:49:00.189898 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-01-17 00:49:00.189902 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-01-17 00:49:00.189906 | orchestrator |
2026-01-17 00:49:00.189913 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-01-17 00:49:00.189916 | orchestrator |
2026-01-17 00:49:00.189920 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-01-17 00:49:00.189924 | orchestrator | Saturday 17 January 2026 00:47:35 +0000 (0:00:00.938) 0:00:01.282 ******
2026-01-17 00:49:00.189935 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-17 00:49:00.189940 | orchestrator |
2026-01-17 00:49:00.189944 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-01-17 00:49:00.189948 | orchestrator | Saturday 17 January 2026 00:47:38 +0000 (0:00:02.374) 0:00:03.656 ******
2026-01-17 00:49:00.189952 | orchestrator | ok: [testbed-manager]
2026-01-17 00:49:00.189955 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:49:00.189959 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:49:00.189963 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:49:00.189967 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:49:00.189975 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:49:00.189979 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:49:00.189983 | orchestrator |
2026-01-17 00:49:00.189987 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-01-17 00:49:00.189990 | orchestrator | Saturday 17 January 2026 00:47:40 +0000 (0:00:01.731) 0:00:05.388 ******
2026-01-17 00:49:00.189997 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:49:00.190001 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:49:00.190004 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:49:00.190008 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:49:00.190012 | orchestrator | ok: [testbed-manager]
2026-01-17 00:49:00.190064 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:49:00.190070 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:49:00.190073 | orchestrator |
2026-01-17 00:49:00.190077 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-01-17 00:49:00.190081 | orchestrator | Saturday 17 January 2026 00:47:42 +0000 (0:00:02.640) 0:00:08.029 ******
2026-01-17 00:49:00.190085 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:49:00.190088 | orchestrator | changed: [testbed-manager]
2026-01-17 00:49:00.190092 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:49:00.190096 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:49:00.190100 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:49:00.190103 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:49:00.190107 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:49:00.190111 | orchestrator |
2026-01-17 00:49:00.190114 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-01-17 00:49:00.190118 | orchestrator | Saturday 17 January 2026 00:47:45 +0000 (0:00:02.738) 0:00:10.767 ******
2026-01-17 00:49:00.190122 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:49:00.190126 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:49:00.190129 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:49:00.190133 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:49:00.190137 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:49:00.190140 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:49:00.190144 | orchestrator | changed: [testbed-manager]
2026-01-17 00:49:00.190148 | orchestrator |
2026-01-17 00:49:00.190151 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-01-17 00:49:00.190155 | orchestrator | Saturday 17 January 2026 00:47:58 +0000 (0:00:12.823) 0:00:23.591 ******
2026-01-17 00:49:00.190159 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:49:00.190163 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:49:00.190166 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:49:00.190170 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:49:00.190174 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:49:00.190177 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:49:00.190181 | orchestrator | changed: [testbed-manager] 2026-01-17 00:49:00.190185 | orchestrator | 2026-01-17 00:49:00.190188 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-01-17 00:49:00.190192 | orchestrator | Saturday 17 January 2026 00:48:37 +0000 (0:00:39.367) 0:01:02.959 ****** 2026-01-17 00:49:00.190197 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 00:49:00.190202 | orchestrator | 2026-01-17 00:49:00.190206 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-01-17 00:49:00.190210 | orchestrator | Saturday 17 January 2026 00:48:39 +0000 (0:00:01.546) 0:01:04.505 ****** 2026-01-17 00:49:00.190213 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-01-17 00:49:00.190218 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-01-17 00:49:00.190221 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-01-17 00:49:00.190225 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-01-17 00:49:00.190229 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-01-17 00:49:00.190233 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-01-17 00:49:00.190236 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-01-17 00:49:00.190240 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-01-17 00:49:00.190244 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-01-17 00:49:00.190251 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-01-17 00:49:00.190255 | orchestrator | changed: [testbed-manager] => 
(item=stream.conf) 2026-01-17 00:49:00.190258 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-01-17 00:49:00.190262 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-01-17 00:49:00.190266 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-01-17 00:49:00.190269 | orchestrator | 2026-01-17 00:49:00.190273 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-01-17 00:49:00.190277 | orchestrator | Saturday 17 January 2026 00:48:44 +0000 (0:00:05.748) 0:01:10.253 ****** 2026-01-17 00:49:00.190281 | orchestrator | ok: [testbed-manager] 2026-01-17 00:49:00.190287 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:49:00.190291 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:49:00.190294 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:49:00.190298 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:49:00.190302 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:49:00.190306 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:49:00.190309 | orchestrator | 2026-01-17 00:49:00.190313 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-01-17 00:49:00.190317 | orchestrator | Saturday 17 January 2026 00:48:46 +0000 (0:00:01.189) 0:01:11.442 ****** 2026-01-17 00:49:00.190321 | orchestrator | changed: [testbed-manager] 2026-01-17 00:49:00.190324 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:49:00.190328 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:49:00.190332 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:49:00.190335 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:49:00.190339 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:49:00.190343 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:49:00.190346 | orchestrator | 2026-01-17 00:49:00.190350 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] 
*************** 2026-01-17 00:49:00.190358 | orchestrator | Saturday 17 January 2026 00:48:47 +0000 (0:00:01.552) 0:01:12.995 ****** 2026-01-17 00:49:00.190362 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:49:00.190365 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:49:00.190369 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:49:00.190373 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:49:00.190377 | orchestrator | ok: [testbed-manager] 2026-01-17 00:49:00.190380 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:49:00.190430 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:49:00.190436 | orchestrator | 2026-01-17 00:49:00.190440 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-01-17 00:49:00.190444 | orchestrator | Saturday 17 January 2026 00:48:49 +0000 (0:00:01.658) 0:01:14.653 ****** 2026-01-17 00:49:00.190448 | orchestrator | ok: [testbed-manager] 2026-01-17 00:49:00.190451 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:49:00.190455 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:49:00.190459 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:49:00.190462 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:49:00.190466 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:49:00.190470 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:49:00.190473 | orchestrator | 2026-01-17 00:49:00.190477 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-01-17 00:49:00.190481 | orchestrator | Saturday 17 January 2026 00:48:51 +0000 (0:00:02.478) 0:01:17.132 ****** 2026-01-17 00:49:00.190485 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-01-17 00:49:00.190491 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 00:49:00.190495 | orchestrator | 2026-01-17 00:49:00.190498 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-01-17 00:49:00.190502 | orchestrator | Saturday 17 January 2026 00:48:53 +0000 (0:00:01.741) 0:01:18.874 ****** 2026-01-17 00:49:00.190510 | orchestrator | changed: [testbed-manager] 2026-01-17 00:49:00.190514 | orchestrator | 2026-01-17 00:49:00.190517 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-01-17 00:49:00.190521 | orchestrator | Saturday 17 January 2026 00:48:55 +0000 (0:00:02.145) 0:01:21.019 ****** 2026-01-17 00:49:00.190525 | orchestrator | changed: [testbed-manager] 2026-01-17 00:49:00.190529 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:49:00.190532 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:49:00.190536 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:49:00.190540 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:49:00.190543 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:49:00.190547 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:49:00.190553 | orchestrator | 2026-01-17 00:49:00.190559 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:49:00.190565 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:49:00.190573 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:49:00.190583 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:49:00.190589 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:49:00.190595 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2026-01-17 00:49:00.190601 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:49:00.190606 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:49:00.190613 | orchestrator | 2026-01-17 00:49:00.190619 | orchestrator | 2026-01-17 00:49:00.190625 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 00:49:00.190632 | orchestrator | Saturday 17 January 2026 00:48:58 +0000 (0:00:02.976) 0:01:23.995 ****** 2026-01-17 00:49:00.190638 | orchestrator | =============================================================================== 2026-01-17 00:49:00.190645 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 39.37s 2026-01-17 00:49:00.190651 | orchestrator | osism.services.netdata : Add repository -------------------------------- 12.82s 2026-01-17 00:49:00.190658 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.75s 2026-01-17 00:49:00.190662 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.98s 2026-01-17 00:49:00.190665 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.74s 2026-01-17 00:49:00.190669 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.64s 2026-01-17 00:49:00.190673 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.48s 2026-01-17 00:49:00.190677 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.37s 2026-01-17 00:49:00.190680 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.15s 2026-01-17 00:49:00.190684 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.74s 2026-01-17 
00:49:00.190688 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.73s 2026-01-17 00:49:00.190695 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.66s 2026-01-17 00:49:00.190699 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.55s 2026-01-17 00:49:00.190702 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.55s 2026-01-17 00:49:00.190710 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.19s 2026-01-17 00:49:00.190714 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.94s 2026-01-17 00:49:00.190718 | orchestrator | 2026-01-17 00:49:00 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:49:00.190806 | orchestrator | 2026-01-17 00:49:00 | INFO  | Task 09185a01-5d52-4415-9a1a-56df5394b85d is in state STARTED 2026-01-17 00:49:00.190941 | orchestrator | 2026-01-17 00:49:00 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:49:03.229310 | orchestrator | 2026-01-17 00:49:03 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:49:03.231136 | orchestrator | 2026-01-17 00:49:03 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED 2026-01-17 00:49:03.235502 | orchestrator | 2026-01-17 00:49:03 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:49:03.236277 | orchestrator | 2026-01-17 00:49:03 | INFO  | Task 09185a01-5d52-4415-9a1a-56df5394b85d is in state STARTED 2026-01-17 00:49:03.236533 | orchestrator | 2026-01-17 00:49:03 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:49:06.324087 | orchestrator | 2026-01-17 00:49:06 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:49:06.324439 | orchestrator | 2026-01-17 00:49:06 | INFO  
| Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED 2026-01-17 00:49:06.324772 | orchestrator | 2026-01-17 00:49:06 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:49:06.325601 | orchestrator | 2026-01-17 00:49:06 | INFO  | Task 09185a01-5d52-4415-9a1a-56df5394b85d is in state STARTED 2026-01-17 00:49:06.326085 | orchestrator | 2026-01-17 00:49:06 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:49:09.372039 | orchestrator | 2026-01-17 00:49:09 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:49:09.373804 | orchestrator | 2026-01-17 00:49:09 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED 2026-01-17 00:49:09.375191 | orchestrator | 2026-01-17 00:49:09 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:49:09.375228 | orchestrator | 2026-01-17 00:49:09 | INFO  | Task 09185a01-5d52-4415-9a1a-56df5394b85d is in state SUCCESS 2026-01-17 00:49:09.375234 | orchestrator | 2026-01-17 00:49:09 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:49:12.436550 | orchestrator | 2026-01-17 00:49:12 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:49:12.440977 | orchestrator | 2026-01-17 00:49:12 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED 2026-01-17 00:49:12.445080 | orchestrator | 2026-01-17 00:49:12 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:49:12.445158 | orchestrator | 2026-01-17 00:49:12 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:49:15.480427 | orchestrator | 2026-01-17 00:49:15 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:49:15.480798 | orchestrator | 2026-01-17 00:49:15 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED 2026-01-17 00:49:15.484275 | orchestrator | 2026-01-17 00:49:15 | INFO  | Task 
1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:49:15.484741 | orchestrator | 2026-01-17 00:49:15 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:49:18.542460 | orchestrator | 2026-01-17 00:49:18 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED
2026-01-17 00:49:18.543988 | orchestrator | 2026-01-17 00:49:18 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED
2026-01-17 00:49:18.544596 | orchestrator | 2026-01-17 00:49:18 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:49:18.544839 | orchestrator | 2026-01-17 00:49:18 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:49:21.596332 | orchestrator | 2026-01-17 00:49:21 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED
2026-01-17 00:49:21.598169 | orchestrator | 2026-01-17 00:49:21 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED
2026-01-17 00:49:21.600906 | orchestrator | 2026-01-17 00:49:21 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:49:21.603238 | orchestrator | 2026-01-17 00:49:21 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:49:24.655294 | orchestrator | 2026-01-17 00:49:24 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED
2026-01-17 00:49:24.657329 | orchestrator | 2026-01-17 00:49:24 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED
2026-01-17 00:49:24.658085 | orchestrator | 2026-01-17 00:49:24 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:49:24.658107 | orchestrator | 2026-01-17 00:49:24 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:49:27.697124 | orchestrator | 2026-01-17 00:49:27 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED
2026-01-17 00:49:27.700821 | orchestrator | 2026-01-17 00:49:27 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED
2026-01-17 00:49:27.702650 | orchestrator | 2026-01-17 00:49:27 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:49:27.703078 | orchestrator | 2026-01-17 00:49:27 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:49:30.746645 | orchestrator | 2026-01-17 00:49:30 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED
2026-01-17 00:49:30.749350 | orchestrator | 2026-01-17 00:49:30 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED
2026-01-17 00:49:30.752294 | orchestrator | 2026-01-17 00:49:30 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:49:30.753103 | orchestrator | 2026-01-17 00:49:30 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:49:33.804893 | orchestrator | 2026-01-17 00:49:33 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED
2026-01-17 00:49:33.806429 | orchestrator | 2026-01-17 00:49:33 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED
2026-01-17 00:49:33.808915 | orchestrator | 2026-01-17 00:49:33 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:49:33.808976 | orchestrator | 2026-01-17 00:49:33 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:49:36.845886 | orchestrator | 2026-01-17 00:49:36 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED
2026-01-17 00:49:36.846971 | orchestrator | 2026-01-17 00:49:36 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED
2026-01-17 00:49:36.848852 | orchestrator | 2026-01-17 00:49:36 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:49:36.848892 | orchestrator | 2026-01-17 00:49:36 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:49:39.909863 | orchestrator | 2026-01-17 00:49:39 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED
2026-01-17 00:49:39.909977 | orchestrator | 2026-01-17 00:49:39 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED
2026-01-17 00:49:39.910247 | orchestrator | 2026-01-17 00:49:39 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:49:39.910454 | orchestrator | 2026-01-17 00:49:39 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:49:42.944495 | orchestrator | 2026-01-17 00:49:42 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED
2026-01-17 00:49:42.946047 | orchestrator | 2026-01-17 00:49:42 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED
2026-01-17 00:49:42.947182 | orchestrator | 2026-01-17 00:49:42 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:49:42.947200 | orchestrator | 2026-01-17 00:49:42 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:49:45.982853 | orchestrator | 2026-01-17 00:49:45 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED
2026-01-17 00:49:45.986701 | orchestrator | 2026-01-17 00:49:45 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED
2026-01-17 00:49:45.988319 | orchestrator | 2026-01-17 00:49:45 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:49:45.988624 | orchestrator | 2026-01-17 00:49:45 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:49:49.039249 | orchestrator | 2026-01-17 00:49:49 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED
2026-01-17 00:49:49.042347 | orchestrator | 2026-01-17 00:49:49 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state STARTED
2026-01-17 00:49:49.047435 | orchestrator | 2026-01-17 00:49:49 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:49:49.047734 | orchestrator | 2026-01-17 00:49:49 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:49:52.097037 | orchestrator | 2026-01-17 00:49:52 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED
2026-01-17 00:49:52.100804 | orchestrator | 2026-01-17 00:49:52 | INFO  | Task c3795fa1-ef41-4ad9-b5ad-666bf055dd67 is in state SUCCESS
2026-01-17 00:49:52.102910 | orchestrator |
2026-01-17 00:49:52.102971 | orchestrator |
2026-01-17 00:49:52.103361 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-01-17 00:49:52.103398 | orchestrator |
2026-01-17 00:49:52.103458 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-01-17 00:49:52.103465 | orchestrator | Saturday 17 January 2026 00:47:51 +0000 (0:00:00.557) 0:00:00.558 ******
2026-01-17 00:49:52.103471 | orchestrator | ok: [testbed-manager]
2026-01-17 00:49:52.103476 | orchestrator |
2026-01-17 00:49:52.103482 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-01-17 00:49:52.103492 | orchestrator | Saturday 17 January 2026 00:47:53 +0000 (0:00:02.117) 0:00:02.675 ******
2026-01-17 00:49:52.103498 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-01-17 00:49:52.103503 | orchestrator |
2026-01-17 00:49:52.103508 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-01-17 00:49:52.103513 | orchestrator | Saturday 17 January 2026 00:47:53 +0000 (0:00:00.752) 0:00:03.427 ******
2026-01-17 00:49:52.103519 | orchestrator | changed: [testbed-manager]
2026-01-17 00:49:52.103524 | orchestrator |
2026-01-17 00:49:52.103529 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-01-17 00:49:52.103534 | orchestrator | Saturday 17 January 2026 00:47:55 +0000 (0:00:01.337) 0:00:04.765 ******
2026-01-17 00:49:52.103539 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
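The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines earlier in this log come from a simple state-polling loop over the queued tasks. A minimal sketch of that pattern, where `get_task_state(task_id) -> str` is a hypothetical callable standing in for whatever the OSISM client actually uses (not a real OSISM API):

```python
import time


def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=300.0):
    """Poll each task's state until it reaches a terminal state.

    get_task_state is an assumed callable, not the real OSISM client API.
    Returns a dict mapping task id -> terminal state.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    final_states = {}
    while pending:
        # sorted() copies the set, so we may safely discard while iterating
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                final_states[task_id] = state
                pending.discard(task_id)
        if not pending:
            break
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    return final_states
```

The fixed 1-second interval matches the log output above; a production poller would typically also back off and surface FAILURE states instead of treating them uniformly.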
2026-01-17 00:49:52.103545 | orchestrator | ok: [testbed-manager]
2026-01-17 00:49:52.103550 | orchestrator |
2026-01-17 00:49:52.103556 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-01-17 00:49:52.103575 | orchestrator | Saturday 17 January 2026 00:49:00 +0000 (0:01:05.044) 0:01:09.809 ******
2026-01-17 00:49:52.103581 | orchestrator | changed: [testbed-manager]
2026-01-17 00:49:52.103586 | orchestrator |
2026-01-17 00:49:52.103592 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 00:49:52.103596 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 00:49:52.103600 | orchestrator |
2026-01-17 00:49:52.103603 | orchestrator |
2026-01-17 00:49:52.103606 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 00:49:52.103609 | orchestrator | Saturday 17 January 2026 00:49:07 +0000 (0:00:07.203) 0:01:17.013 ******
2026-01-17 00:49:52.103613 | orchestrator | ===============================================================================
2026-01-17 00:49:52.103616 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 65.04s
2026-01-17 00:49:52.103619 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 7.20s
2026-01-17 00:49:52.103622 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 2.12s
2026-01-17 00:49:52.103625 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.34s
2026-01-17 00:49:52.103628 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.75s
2026-01-17 00:49:52.103631 | orchestrator |
2026-01-17 00:49:52.103634 | orchestrator |
2026-01-17 00:49:52.103637 | orchestrator | PLAY [Apply role common] *******************************************************
2026-01-17 00:49:52.103641 | orchestrator |
2026-01-17 00:49:52.103644 | orchestrator | TASK [common : include_tasks] **************************************************
2026-01-17 00:49:52.103647 | orchestrator | Saturday 17 January 2026 00:47:27 +0000 (0:00:00.276) 0:00:00.276 ******
2026-01-17 00:49:52.103650 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-17 00:49:52.103654 | orchestrator |
2026-01-17 00:49:52.103657 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-01-17 00:49:52.103660 | orchestrator | Saturday 17 January 2026 00:47:28 +0000 (0:00:01.171) 0:00:01.448 ******
2026-01-17 00:49:52.103663 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-17 00:49:52.103666 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-17 00:49:52.103669 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-17 00:49:52.103675 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-17 00:49:52.103679 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-17 00:49:52.103684 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-17 00:49:52.103689 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-17 00:49:52.103696 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-17 00:49:52.103702 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-17 00:49:52.103707 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-17 00:49:52.103712 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-17 00:49:52.103718 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-17 00:49:52.103723 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-17 00:49:52.103728 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-17 00:49:52.103733 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-17 00:49:52.103742 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-17 00:49:52.103781 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-17 00:49:52.103788 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-17 00:49:52.103793 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-17 00:49:52.103798 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-17 00:49:52.103802 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-17 00:49:52.103807 | orchestrator |
2026-01-17 00:49:52.103854 | orchestrator | TASK [common : include_tasks] **************************************************
2026-01-17 00:49:52.103862 | orchestrator | Saturday 17 January 2026 00:47:32 +0000 (0:00:03.966) 0:00:05.415 ******
2026-01-17 00:49:52.103868 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-17 00:49:52.103874 | orchestrator |
2026-01-17 00:49:52.103879 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-01-17 00:49:52.103885 | orchestrator | Saturday 17 January 2026 00:47:33 +0000 (0:00:01.196) 0:00:06.611 ******
2026-01-17 00:49:52.103892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-17 00:49:52.103899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-17 00:49:52.103905 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-17 00:49:52.103915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-17 00:49:52.103921 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-17 00:49:52.103934 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-17 00:49:52.103965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.103972 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-17 00:49:52.103975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.103979 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.103982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.103988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.103995 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.104010 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.104013 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.104017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.104020 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.104024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.104027 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.104032 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.104037 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.104041 | orchestrator |
2026-01-17 00:49:52.104044 | orchestrator | TASK [service-cert-copy : common | Copying
over backend internal TLS certificate] ***
2026-01-17 00:49:52.104048 | orchestrator | Saturday 17 January 2026 00:47:38 +0000 (0:00:05.383) 0:00:11.994 ******
2026-01-17 00:49:52.104065 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-17 00:49:52.104069 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.104073 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.104077 | orchestrator | skipping: [testbed-manager]
2026-01-17 00:49:52.104081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-17 00:49:52.104085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.104088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.104095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-17 00:49:52.104099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.104107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.104110 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:49:52.104114 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:49:52.104118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-17 00:49:52.104122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.104125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.104129 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:49:52.104133 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-17 00:49:52.104139 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.104143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.104147 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:49:52.104150 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-17 00:49:52.104158 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.104161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.104166 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:49:52.104174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-17 00:49:52.104181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.104192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.104198 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:49:52.104203 | orchestrator |
2026-01-17 00:49:52.104209 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-01-17 00:49:52.104214 | orchestrator | Saturday 17 January 2026 00:47:40 +0000 (0:00:01.314) 0:00:13.308 ******
2026-01-17 00:49:52.104220 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-17 00:49:52.104226 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.104237 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2',
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 00:49:52.104242 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:49:52.104252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-17 00:49:52.104258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 00:49:52.104264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 00:49:52.104273 | 
orchestrator | skipping: [testbed-node-1] 2026-01-17 00:49:52.104278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-17 00:49:52.104287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 00:49:52.104293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-17 00:49:52.104302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 00:49:52.104308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 00:49:52.104315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 00:49:52.104321 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-17 00:49:52.104327 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 00:49:52.104335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 00:49:52.104344 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-17 00:49:52.104350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 00:49:52.104356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 00:49:52.104362 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:49:52.104371 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:49:52.104377 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:49:52.104383 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:49:52.104389 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-17 00:49:52.104394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 00:49:52.104400 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 00:49:52.104403 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:49:52.104419 | orchestrator | 2026-01-17 00:49:52.104422 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-01-17 00:49:52.104425 | orchestrator | Saturday 17 January 2026 00:47:43 +0000 (0:00:02.912) 0:00:16.220 ****** 2026-01-17 00:49:52.104428 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:49:52.104432 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:49:52.104435 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:49:52.104438 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:49:52.104441 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:49:52.104444 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:49:52.104447 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:49:52.104450 | orchestrator | 2026-01-17 00:49:52.104453 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-01-17 00:49:52.104456 | orchestrator | Saturday 17 January 2026 00:47:44 +0000 (0:00:01.111) 0:00:17.332 ****** 2026-01-17 00:49:52.104460 | orchestrator | skipping: [testbed-manager] 2026-01-17 00:49:52.104463 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:49:52.104466 | 
orchestrator | skipping: [testbed-node-1] 2026-01-17 00:49:52.104471 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:49:52.104476 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:49:52.104484 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:49:52.104489 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:49:52.104494 | orchestrator | 2026-01-17 00:49:52.104499 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-01-17 00:49:52.104504 | orchestrator | Saturday 17 January 2026 00:47:45 +0000 (0:00:01.099) 0:00:18.431 ****** 2026-01-17 00:49:52.104512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-17 00:49:52.104518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-17 00:49:52.104528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-17 00:49:52.104535 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-17 00:49:52.104544 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-17 00:49:52.104549 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-17 00:49:52.104555 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.104562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.104568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.104573 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-17 00:49:52.104582 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.104591 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.104596 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.104602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.104606 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.104613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.104619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.104628 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.104637 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.104643 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.104648 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.104653 | orchestrator | 2026-01-17 00:49:52.104658 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-01-17 00:49:52.104664 | orchestrator | Saturday 17 January 2026 00:47:52 +0000 (0:00:07.474) 0:00:25.905 ****** 2026-01-17 00:49:52.104669 | orchestrator | [WARNING]: Skipped 2026-01-17 00:49:52.104675 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-01-17 00:49:52.104681 | orchestrator | to this access issue: 2026-01-17 00:49:52.104686 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-01-17 00:49:52.104692 | orchestrator | directory 2026-01-17 00:49:52.104697 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-17 00:49:52.104703 | orchestrator | 2026-01-17 00:49:52.104708 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-01-17 00:49:52.104714 | orchestrator | Saturday 17 January 2026 00:47:54 +0000 (0:00:01.435) 0:00:27.341 ****** 2026-01-17 00:49:52.104719 | orchestrator | [WARNING]: Skipped 2026-01-17 00:49:52.104724 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-01-17 00:49:52.104729 | orchestrator | to this access issue: 2026-01-17 00:49:52.104734 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-01-17 00:49:52.104739 | orchestrator | directory 2026-01-17 00:49:52.104745 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-17 00:49:52.104749 | orchestrator | 2026-01-17 00:49:52.104755 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-01-17 00:49:52.104760 | orchestrator | Saturday 17 
January 2026 00:47:55 +0000 (0:00:00.901) 0:00:28.242 ****** 2026-01-17 00:49:52.104765 | orchestrator | [WARNING]: Skipped 2026-01-17 00:49:52.104771 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-01-17 00:49:52.104776 | orchestrator | to this access issue: 2026-01-17 00:49:52.104781 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-01-17 00:49:52.104786 | orchestrator | directory 2026-01-17 00:49:52.104791 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-17 00:49:52.104796 | orchestrator | 2026-01-17 00:49:52.104802 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-01-17 00:49:52.104807 | orchestrator | Saturday 17 January 2026 00:47:56 +0000 (0:00:00.924) 0:00:29.167 ****** 2026-01-17 00:49:52.104812 | orchestrator | [WARNING]: Skipped 2026-01-17 00:49:52.104820 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-01-17 00:49:52.104825 | orchestrator | to this access issue: 2026-01-17 00:49:52.104835 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-01-17 00:49:52.104840 | orchestrator | directory 2026-01-17 00:49:52.104845 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-17 00:49:52.104850 | orchestrator | 2026-01-17 00:49:52.104856 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-01-17 00:49:52.104861 | orchestrator | Saturday 17 January 2026 00:47:57 +0000 (0:00:00.959) 0:00:30.126 ****** 2026-01-17 00:49:52.104866 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:49:52.104871 | orchestrator | changed: [testbed-manager] 2026-01-17 00:49:52.104876 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:49:52.104882 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:49:52.104886 | orchestrator | changed: [testbed-node-0] 
2026-01-17 00:49:52.104892 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:49:52.104897 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:49:52.104902 | orchestrator | 2026-01-17 00:49:52.104907 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-01-17 00:49:52.104913 | orchestrator | Saturday 17 January 2026 00:48:00 +0000 (0:00:03.828) 0:00:33.954 ****** 2026-01-17 00:49:52.104919 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-17 00:49:52.104925 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-17 00:49:52.104930 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-17 00:49:52.104939 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-17 00:49:52.104945 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-17 00:49:52.104950 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-17 00:49:52.104955 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-17 00:49:52.104960 | orchestrator | 2026-01-17 00:49:52.104965 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-01-17 00:49:52.104971 | orchestrator | Saturday 17 January 2026 00:48:04 +0000 (0:00:04.024) 0:00:37.979 ****** 2026-01-17 00:49:52.104976 | orchestrator | changed: [testbed-manager] 2026-01-17 00:49:52.104981 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:49:52.104986 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:49:52.104992 | orchestrator | changed: [testbed-node-2] 2026-01-17 
00:49:52.104997 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:49:52.105002 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:49:52.105007 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:49:52.105012 | orchestrator | 2026-01-17 00:49:52.105018 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-01-17 00:49:52.105023 | orchestrator | Saturday 17 January 2026 00:48:08 +0000 (0:00:03.831) 0:00:41.810 ****** 2026-01-17 00:49:52.105028 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-17 00:49:52.105034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 00:49:52.105046 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-17 00:49:52.105054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 00:49:52.105060 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.105072 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-17 00:49:52.105078 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 00:49:52.105084 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.105088 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-17 00:49:52.105092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 00:49:52.105097 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.105103 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-17 00:49:52.105108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 00:49:52.105117 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.105125 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-17 00:49:52.105131 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.105136 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 00:49:52.105145 | orchestrator | 
ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-17 00:49:52.105151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 00:49:52.105159 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.105165 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.105169 | orchestrator | 
2026-01-17 00:49:52.105174 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-01-17 00:49:52.105179 | orchestrator | Saturday 17 January 2026 00:48:11 +0000 (0:00:03.071) 0:00:44.881 ******
2026-01-17 00:49:52.105185 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-17 00:49:52.105190 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-17 00:49:52.105196 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-17 00:49:52.105204 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-17 00:49:52.105209 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-17 00:49:52.105214 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-17 00:49:52.105219 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-17 00:49:52.105224 | orchestrator | 
2026-01-17 00:49:52.105230 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-01-17 00:49:52.105235 | orchestrator | Saturday 17 January 2026 00:48:16 +0000 (0:00:04.200) 0:00:49.082 ******
2026-01-17 00:49:52.105240 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-17 00:49:52.105245 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-17 00:49:52.105251 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-17 00:49:52.105256 | orchestrator | changed: 
[testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-17 00:49:52.105264 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-17 00:49:52.105269 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-17 00:49:52.105273 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-17 00:49:52.105279 | orchestrator | 2026-01-17 00:49:52.105284 | orchestrator | TASK [common : Check common containers] **************************************** 2026-01-17 00:49:52.105289 | orchestrator | Saturday 17 January 2026 00:48:18 +0000 (0:00:02.953) 0:00:52.035 ****** 2026-01-17 00:49:52.105295 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-17 00:49:52.105301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-17 00:49:52.105307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-17 00:49:52.105315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-17 00:49:52.105321 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-17 00:49:52.105330 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.105335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.105344 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-17 00:49:52.105350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-17 00:49:52.105356 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-17 00:49:52.105365 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.105371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.105381 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.105387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.105396 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.105402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.105420 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.105426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.105433 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.105438 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:49:52.105444 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:49:52.105449 | orchestrator | 
2026-01-17 00:49:52.105457 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-01-17 00:49:52.105463 | orchestrator | Saturday 17 January 2026 00:48:22 +0000 (0:00:03.483) 0:00:55.518 ******
2026-01-17 00:49:52.105471 | orchestrator | changed: [testbed-manager]
2026-01-17 00:49:52.105476 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:49:52.105481 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:49:52.105486 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:49:52.105491 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:49:52.105496 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:49:52.105501 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:49:52.105506 | orchestrator | 
2026-01-17 00:49:52.105511 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-01-17 00:49:52.105516 | orchestrator | Saturday 17 January 2026 00:48:23 +0000 (0:00:01.431) 0:00:56.950 ******
2026-01-17 00:49:52.105521 | orchestrator | changed: [testbed-manager]
2026-01-17 00:49:52.105526 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:49:52.105531 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:49:52.105535 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:49:52.105541 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:49:52.105546 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:49:52.105551 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:49:52.105556 | orchestrator | 
2026-01-17 00:49:52.105561 | orchestrator | TASK [common : Flush 
handlers] ************************************************* 2026-01-17 00:49:52.105566 | orchestrator | Saturday 17 January 2026 00:48:25 +0000 (0:00:01.250) 0:00:58.200 ****** 2026-01-17 00:49:52.105572 | orchestrator | 2026-01-17 00:49:52.105577 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-17 00:49:52.105582 | orchestrator | Saturday 17 January 2026 00:48:25 +0000 (0:00:00.067) 0:00:58.268 ****** 2026-01-17 00:49:52.105587 | orchestrator | 2026-01-17 00:49:52.105592 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-17 00:49:52.105597 | orchestrator | Saturday 17 January 2026 00:48:25 +0000 (0:00:00.064) 0:00:58.332 ****** 2026-01-17 00:49:52.105602 | orchestrator | 2026-01-17 00:49:52.105607 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-17 00:49:52.105612 | orchestrator | Saturday 17 January 2026 00:48:25 +0000 (0:00:00.239) 0:00:58.572 ****** 2026-01-17 00:49:52.105617 | orchestrator | 2026-01-17 00:49:52.105622 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-17 00:49:52.105628 | orchestrator | Saturday 17 January 2026 00:48:25 +0000 (0:00:00.065) 0:00:58.637 ****** 2026-01-17 00:49:52.105633 | orchestrator | 2026-01-17 00:49:52.105638 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-17 00:49:52.105643 | orchestrator | Saturday 17 January 2026 00:48:25 +0000 (0:00:00.060) 0:00:58.697 ****** 2026-01-17 00:49:52.105648 | orchestrator | 2026-01-17 00:49:52.105653 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-17 00:49:52.105658 | orchestrator | Saturday 17 January 2026 00:48:25 +0000 (0:00:00.066) 0:00:58.764 ****** 2026-01-17 00:49:52.105663 | orchestrator | 2026-01-17 00:49:52.105669 | orchestrator | RUNNING HANDLER 
[common : Restart fluentd container] ***************************
2026-01-17 00:49:52.105674 | orchestrator | Saturday 17 January 2026 00:48:25 +0000 (0:00:00.089) 0:00:58.853 ******
2026-01-17 00:49:52.105679 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:49:52.105684 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:49:52.105689 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:49:52.105694 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:49:52.105699 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:49:52.105705 | orchestrator | changed: [testbed-manager]
2026-01-17 00:49:52.105710 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:49:52.105715 | orchestrator | 
2026-01-17 00:49:52.105720 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-01-17 00:49:52.105725 | orchestrator | Saturday 17 January 2026 00:49:01 +0000 (0:00:35.539) 0:01:34.393 ******
2026-01-17 00:49:52.105730 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:49:52.105735 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:49:52.105745 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:49:52.105750 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:49:52.105755 | orchestrator | changed: [testbed-manager]
2026-01-17 00:49:52.105760 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:49:52.105764 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:49:52.105769 | orchestrator | 
2026-01-17 00:49:52.105773 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-01-17 00:49:52.105778 | orchestrator | Saturday 17 January 2026 00:49:38 +0000 (0:00:37.608) 0:02:12.002 ******
2026-01-17 00:49:52.105786 | orchestrator | ok: [testbed-manager]
2026-01-17 00:49:52.105791 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:49:52.105797 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:49:52.105802 | orchestrator | ok: [testbed-node-2] 
2026-01-17 00:49:52.105806 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:49:52.105811 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:49:52.105815 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:49:52.105819 | orchestrator | 
2026-01-17 00:49:52.105824 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-01-17 00:49:52.105828 | orchestrator | Saturday 17 January 2026 00:49:41 +0000 (0:00:02.314) 0:02:14.317 ******
2026-01-17 00:49:52.105833 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:49:52.105838 | orchestrator | changed: [testbed-manager]
2026-01-17 00:49:52.105842 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:49:52.105847 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:49:52.105852 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:49:52.105856 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:49:52.105860 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:49:52.105865 | orchestrator | 
2026-01-17 00:49:52.105869 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 00:49:52.105875 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-17 00:49:52.105881 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-17 00:49:52.105892 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-17 00:49:52.105897 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-17 00:49:52.105902 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-17 00:49:52.105907 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-17 00:49:52.105912 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-17 00:49:52.105917 | orchestrator | 
2026-01-17 00:49:52.105922 | orchestrator | 
2026-01-17 00:49:52.105927 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 00:49:52.105932 | orchestrator | Saturday 17 January 2026 00:49:50 +0000 (0:00:09.720) 0:02:24.037 ******
2026-01-17 00:49:52.105937 | orchestrator | ===============================================================================
2026-01-17 00:49:52.105942 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 37.61s
2026-01-17 00:49:52.105948 | orchestrator | common : Restart fluentd container ------------------------------------- 35.54s
2026-01-17 00:49:52.105954 | orchestrator | common : Restart cron container ----------------------------------------- 9.72s
2026-01-17 00:49:52.105959 | orchestrator | common : Copying over config.json files for services -------------------- 7.47s
2026-01-17 00:49:52.105964 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.38s
2026-01-17 00:49:52.105977 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 4.20s
2026-01-17 00:49:52.105983 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.02s
2026-01-17 00:49:52.105988 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.97s
2026-01-17 00:49:52.105993 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.83s
2026-01-17 00:49:52.105998 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.83s
2026-01-17 00:49:52.106003 | orchestrator | common : Check common containers ---------------------------------------- 3.48s
2026-01-17 00:49:52.106008 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.07s
2026-01-17 00:49:52.106050 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.95s
2026-01-17 00:49:52.106058 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.91s
2026-01-17 00:49:52.106063 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.31s
2026-01-17 00:49:52.106068 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.44s
2026-01-17 00:49:52.106073 | orchestrator | common : Creating log volume -------------------------------------------- 1.43s
2026-01-17 00:49:52.106079 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.31s
2026-01-17 00:49:52.106084 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.25s
2026-01-17 00:49:52.106089 | orchestrator | common : include_tasks -------------------------------------------------- 1.20s
2026-01-17 00:49:52.106094 | orchestrator | 2026-01-17 00:49:52 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:49:52.106100 | orchestrator | 2026-01-17 00:49:52 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:49:55.139215 | orchestrator | 2026-01-17 00:49:55 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED
2026-01-17 00:49:55.139883 | orchestrator | 2026-01-17 00:49:55 | INFO  | Task dc60e894-f485-4ac3-9bf0-85b31779dd81 is in state STARTED
2026-01-17 00:49:55.140606 | orchestrator | 2026-01-17 00:49:55 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED
2026-01-17 00:49:55.141622 | orchestrator | 2026-01-17 00:49:55 | INFO  | Task 497761d1-a38d-431c-b637-b33867e48908 is in state STARTED
2026-01-17 00:49:55.143111 | orchestrator | 2026-01-17 00:49:55 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:49:55.144225 | 
orchestrator | 2026-01-17 00:49:55 | INFO  | Task 17d6a359-9051-4bcd-b756-f99e7fd42d4b is in state STARTED
2026-01-17 00:49:55.144293 | orchestrator | 2026-01-17 00:49:55 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:49:58.182788 | orchestrator | 2026-01-17 00:49:58 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED
2026-01-17 00:49:58.182886 | orchestrator | 2026-01-17 00:49:58 | INFO  | Task dc60e894-f485-4ac3-9bf0-85b31779dd81 is in state STARTED
2026-01-17 00:49:58.184384 | orchestrator | 2026-01-17 00:49:58 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED
2026-01-17 00:49:58.186532 | orchestrator | 2026-01-17 00:49:58 | INFO  | Task 497761d1-a38d-431c-b637-b33867e48908 is in state STARTED
2026-01-17 00:49:58.186606 | orchestrator | 2026-01-17 00:49:58 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:49:58.186620 | orchestrator | 2026-01-17 00:49:58 | INFO  | Task 17d6a359-9051-4bcd-b756-f99e7fd42d4b is in state STARTED
2026-01-17 00:49:58.186757 | orchestrator | 2026-01-17 00:49:58 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:50:01.230254 | orchestrator | 2026-01-17 00:50:01 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED
2026-01-17 00:50:01.230918 | orchestrator | 2026-01-17 00:50:01 | INFO  | Task dc60e894-f485-4ac3-9bf0-85b31779dd81 is in state STARTED
2026-01-17 00:50:01.232099 | orchestrator | 2026-01-17 00:50:01 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED
2026-01-17 00:50:01.233113 | orchestrator | 2026-01-17 00:50:01 | INFO  | Task 497761d1-a38d-431c-b637-b33867e48908 is in state STARTED
2026-01-17 00:50:01.234080 | orchestrator | 2026-01-17 00:50:01 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:50:01.234897 | orchestrator | 2026-01-17 00:50:01 | INFO  | Task 17d6a359-9051-4bcd-b756-f99e7fd42d4b is in state STARTED
2026-01-17 00:50:01.235100 | orchestrator | 2026-01-17 00:50:01 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:50:04.275169 | orchestrator | 2026-01-17 00:50:04 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED
2026-01-17 00:50:04.275336 | orchestrator | 2026-01-17 00:50:04 | INFO  | Task dc60e894-f485-4ac3-9bf0-85b31779dd81 is in state STARTED
2026-01-17 00:50:04.275366 | orchestrator | 2026-01-17 00:50:04 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED
2026-01-17 00:50:04.276649 | orchestrator | 2026-01-17 00:50:04 | INFO  | Task 497761d1-a38d-431c-b637-b33867e48908 is in state STARTED
2026-01-17 00:50:04.277697 | orchestrator | 2026-01-17 00:50:04 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:50:04.279359 | orchestrator | 2026-01-17 00:50:04 | INFO  | Task 17d6a359-9051-4bcd-b756-f99e7fd42d4b is in state STARTED
2026-01-17 00:50:04.279405 | orchestrator | 2026-01-17 00:50:04 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:50:07.314556 | orchestrator | 2026-01-17 00:50:07 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED
2026-01-17 00:50:07.314780 | orchestrator | 2026-01-17 00:50:07 | INFO  | Task dc60e894-f485-4ac3-9bf0-85b31779dd81 is in state STARTED
2026-01-17 00:50:07.315661 | orchestrator | 2026-01-17 00:50:07 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED
2026-01-17 00:50:07.316315 | orchestrator | 2026-01-17 00:50:07 | INFO  | Task 497761d1-a38d-431c-b637-b33867e48908 is in state STARTED
2026-01-17 00:50:07.317029 | orchestrator | 2026-01-17 00:50:07 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:50:07.317697 | orchestrator | 2026-01-17 00:50:07 | INFO  | Task 17d6a359-9051-4bcd-b756-f99e7fd42d4b is in state STARTED
2026-01-17 00:50:07.317719 | orchestrator | 2026-01-17 00:50:07 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:50:10.510453 | orchestrator | 2026-01-17 00:50:10 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED
2026-01-17 00:50:10.510568 | orchestrator | 2026-01-17 00:50:10 | INFO  | Task dc60e894-f485-4ac3-9bf0-85b31779dd81 is in state STARTED
2026-01-17 00:50:10.510580 | orchestrator | 2026-01-17 00:50:10 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED
2026-01-17 00:50:10.510587 | orchestrator | 2026-01-17 00:50:10 | INFO  | Task 497761d1-a38d-431c-b637-b33867e48908 is in state STARTED
2026-01-17 00:50:10.510594 | orchestrator | 2026-01-17 00:50:10 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:50:10.510600 | orchestrator | 2026-01-17 00:50:10 | INFO  | Task 17d6a359-9051-4bcd-b756-f99e7fd42d4b is in state STARTED
2026-01-17 00:50:10.510608 | orchestrator | 2026-01-17 00:50:10 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:50:13.525471 | orchestrator | 2026-01-17 00:50:13 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED
2026-01-17 00:50:13.525549 | orchestrator | 2026-01-17 00:50:13 | INFO  | Task dc60e894-f485-4ac3-9bf0-85b31779dd81 is in state STARTED
2026-01-17 00:50:13.525560 | orchestrator | 2026-01-17 00:50:13 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED
2026-01-17 00:50:13.525568 | orchestrator | 2026-01-17 00:50:13 | INFO  | Task 497761d1-a38d-431c-b637-b33867e48908 is in state STARTED
2026-01-17 00:50:13.525575 | orchestrator | 2026-01-17 00:50:13 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:50:13.525582 | orchestrator | 2026-01-17 00:50:13 | INFO  | Task 17d6a359-9051-4bcd-b756-f99e7fd42d4b is in state STARTED
2026-01-17 00:50:13.525589 | orchestrator | 2026-01-17 00:50:13 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:50:16.524260 | orchestrator | 2026-01-17 00:50:16 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED
2026-01-17 00:50:16.525081 | orchestrator | 2026-01-17 00:50:16 | INFO  | Task dc60e894-f485-4ac3-9bf0-85b31779dd81 is in state STARTED
2026-01-17 00:50:16.525808 | orchestrator | 2026-01-17 00:50:16 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED
2026-01-17 00:50:16.526576 | orchestrator | 2026-01-17 00:50:16 | INFO  | Task 497761d1-a38d-431c-b637-b33867e48908 is in state SUCCESS
2026-01-17 00:50:16.527527 | orchestrator | 2026-01-17 00:50:16 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:50:16.528169 | orchestrator | 2026-01-17 00:50:16 | INFO  | Task 17d6a359-9051-4bcd-b756-f99e7fd42d4b is in state STARTED
2026-01-17 00:50:16.529191 | orchestrator | 2026-01-17 00:50:16 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED
2026-01-17 00:50:16.529217 | orchestrator | 2026-01-17 00:50:16 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:50:19.566204 | orchestrator | 2026-01-17 00:50:19 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED
2026-01-17 00:50:19.566277 | orchestrator | 2026-01-17 00:50:19 | INFO  | Task dc60e894-f485-4ac3-9bf0-85b31779dd81 is in state STARTED
2026-01-17 00:50:19.566286 | orchestrator | 2026-01-17 00:50:19 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED
2026-01-17 00:50:19.567323 | orchestrator | 2026-01-17 00:50:19 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:50:19.569337 | orchestrator | 2026-01-17 00:50:19 | INFO  | Task 17d6a359-9051-4bcd-b756-f99e7fd42d4b is in state STARTED
2026-01-17 00:50:19.570265 | orchestrator | 2026-01-17 00:50:19 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED
2026-01-17 00:50:19.570299 | orchestrator | 2026-01-17 00:50:19 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:50:22.607282 | orchestrator | 2026-01-17 00:50:22 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED
2026-01-17 00:50:22.607341 | orchestrator | 2026-01-17 00:50:22 | INFO  | Task dc60e894-f485-4ac3-9bf0-85b31779dd81 is in state STARTED
2026-01-17 00:50:22.607350 | orchestrator | 2026-01-17 00:50:22 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED
2026-01-17 00:50:22.607356 | orchestrator | 2026-01-17 00:50:22 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:50:22.607362 | orchestrator | 2026-01-17 00:50:22 | INFO  | Task 17d6a359-9051-4bcd-b756-f99e7fd42d4b is in state STARTED
2026-01-17 00:50:22.607367 | orchestrator | 2026-01-17 00:50:22 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED
2026-01-17 00:50:22.607373 | orchestrator | 2026-01-17 00:50:22 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:50:25.656345 | orchestrator | 2026-01-17 00:50:25 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED
2026-01-17 00:50:25.687693 | orchestrator | 2026-01-17 00:50:25 | INFO  | Task dc60e894-f485-4ac3-9bf0-85b31779dd81 is in state STARTED
2026-01-17 00:50:25.687747 | orchestrator | 2026-01-17 00:50:25 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED
2026-01-17 00:50:25.687755 | orchestrator | 2026-01-17 00:50:25 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:50:25.687762 | orchestrator | 2026-01-17 00:50:25 | INFO  | Task 17d6a359-9051-4bcd-b756-f99e7fd42d4b is in state STARTED
2026-01-17 00:50:25.687768 | orchestrator | 2026-01-17 00:50:25 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED
2026-01-17 00:50:25.687775 | orchestrator | 2026-01-17 00:50:25 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:50:28.758903 | orchestrator | 2026-01-17 00:50:28 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED
2026-01-17 00:50:28.760026 | orchestrator | 2026-01-17 00:50:28 | INFO  | Task dc60e894-f485-4ac3-9bf0-85b31779dd81 is in state STARTED
2026-01-17 00:50:28.762730 | orchestrator | 2026-01-17
00:50:28 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED
2026-01-17 00:50:28.763166 | orchestrator | 2026-01-17 00:50:28 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:50:28.764631 | orchestrator | 2026-01-17 00:50:28 | INFO  | Task 17d6a359-9051-4bcd-b756-f99e7fd42d4b is in state STARTED
2026-01-17 00:50:28.765649 | orchestrator | 2026-01-17 00:50:28 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED
2026-01-17 00:50:28.765995 | orchestrator | 2026-01-17 00:50:28 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:50:31.818392 | orchestrator |
2026-01-17 00:50:31.819323 | orchestrator |
2026-01-17 00:50:31.819352 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-17 00:50:31.819358 | orchestrator |
2026-01-17 00:50:31.819362 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-17 00:50:31.819367 | orchestrator | Saturday 17 January 2026 00:49:57 +0000 (0:00:00.655) 0:00:00.655 ******
2026-01-17 00:50:31.819371 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:50:31.819376 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:50:31.819380 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:50:31.819384 | orchestrator |
2026-01-17 00:50:31.819388 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-17 00:50:31.819392 | orchestrator | Saturday 17 January 2026 00:49:58 +0000 (0:00:00.738) 0:00:01.393 ******
2026-01-17 00:50:31.819396 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-01-17 00:50:31.819401 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-01-17 00:50:31.819405 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-01-17 00:50:31.819408 | orchestrator |
2026-01-17 00:50:31.819432 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-01-17 00:50:31.819440 | orchestrator |
2026-01-17 00:50:31.819444 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-01-17 00:50:31.819448 | orchestrator | Saturday 17 January 2026 00:49:59 +0000 (0:00:01.098) 0:00:02.492 ******
2026-01-17 00:50:31.819453 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:50:31.819457 | orchestrator |
2026-01-17 00:50:31.819461 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-01-17 00:50:31.819465 | orchestrator | Saturday 17 January 2026 00:50:00 +0000 (0:00:00.948) 0:00:03.441 ******
2026-01-17 00:50:31.819469 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-01-17 00:50:31.819473 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-01-17 00:50:31.819496 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-01-17 00:50:31.819502 | orchestrator |
2026-01-17 00:50:31.819508 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-01-17 00:50:31.819514 | orchestrator | Saturday 17 January 2026 00:50:01 +0000 (0:00:01.035) 0:00:04.476 ******
2026-01-17 00:50:31.819520 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-01-17 00:50:31.819526 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-01-17 00:50:31.819531 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-01-17 00:50:31.819537 | orchestrator |
2026-01-17 00:50:31.819543 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-01-17 00:50:31.819548 | orchestrator | Saturday 17 January 2026 00:50:03 +0000 (0:00:02.752) 0:00:07.229 ******
2026-01-17 00:50:31.819553 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:50:31.819559 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:50:31.819563 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:50:31.819568 | orchestrator |
2026-01-17 00:50:31.819574 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-01-17 00:50:31.819579 | orchestrator | Saturday 17 January 2026 00:50:06 +0000 (0:00:02.227) 0:00:09.457 ******
2026-01-17 00:50:31.819585 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:50:31.819590 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:50:31.819595 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:50:31.819600 | orchestrator |
2026-01-17 00:50:31.819606 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 00:50:31.819611 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 00:50:31.819619 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 00:50:31.819625 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 00:50:31.819630 | orchestrator |
2026-01-17 00:50:31.819635 | orchestrator |
2026-01-17 00:50:31.819641 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 00:50:31.819648 | orchestrator | Saturday 17 January 2026 00:50:14 +0000 (0:00:08.192) 0:00:17.649 ******
2026-01-17 00:50:31.819653 | orchestrator | ===============================================================================
2026-01-17 00:50:31.819658 | orchestrator | memcached : Restart memcached container --------------------------------- 8.19s
2026-01-17 00:50:31.819664 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.75s
2026-01-17 00:50:31.819669 | orchestrator | memcached : Check memcached container ----------------------------------- 2.23s
2026-01-17 00:50:31.819674 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.10s
2026-01-17 00:50:31.819680 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.04s
2026-01-17 00:50:31.819685 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.95s
2026-01-17 00:50:31.819690 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.74s
2026-01-17 00:50:31.819696 | orchestrator |
2026-01-17 00:50:31.819702 | orchestrator |
2026-01-17 00:50:31.819708 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-17 00:50:31.819714 | orchestrator |
2026-01-17 00:50:31.819719 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-17 00:50:31.819740 | orchestrator | Saturday 17 January 2026 00:49:58 +0000 (0:00:00.637) 0:00:00.637 ******
2026-01-17 00:50:31.819745 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:50:31.819751 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:50:31.819756 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:50:31.819762 | orchestrator |
2026-01-17 00:50:31.819768 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-17 00:50:31.819801 | orchestrator | Saturday 17 January 2026 00:49:59 +0000 (0:00:00.935) 0:00:01.572 ******
2026-01-17 00:50:31.819807 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-01-17 00:50:31.819811 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-01-17 00:50:31.819815 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-01-17 00:50:31.819818 | orchestrator |
2026-01-17 00:50:31.819822 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-01-17 00:50:31.819826 | orchestrator |
2026-01-17 00:50:31.819830 | orchestrator
| TASK [redis : include_tasks] ***************************************************
2026-01-17 00:50:31.819834 | orchestrator | Saturday 17 January 2026 00:50:00 +0000 (0:00:00.939) 0:00:02.511 ******
2026-01-17 00:50:31.819838 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:50:31.819841 | orchestrator |
2026-01-17 00:50:31.819845 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-01-17 00:50:31.819864 | orchestrator | Saturday 17 January 2026 00:50:01 +0000 (0:00:00.760) 0:00:03.272 ******
2026-01-17 00:50:31.819870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-17 00:50:31.819878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-17 00:50:31.819883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-17 00:50:31.819891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-17 00:50:31.819895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-17 00:50:31.819913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-17 00:50:31.819917 | orchestrator |
2026-01-17 00:50:31.819921 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-01-17 00:50:31.819925 | orchestrator | Saturday 17 January 2026 00:50:03 +0000 (0:00:01.992) 0:00:05.265 ******
2026-01-17 00:50:31.819929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-17 00:50:31.819933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-17 00:50:31.819937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-17 00:50:31.819944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-17 00:50:31.819948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-17 00:50:31.819959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-17 00:50:31.819963 | orchestrator |
2026-01-17 00:50:31.819967 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-01-17 00:50:31.819971 | orchestrator | Saturday 17 January 2026 00:50:06 +0000 (0:00:03.169) 0:00:08.435 ******
2026-01-17 00:50:31.819975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-17 00:50:31.819979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-17 00:50:31.819982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-17 00:50:31.819989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-17 00:50:31.819993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-17 00:50:31.820000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-17 00:50:31.820004 | orchestrator |
2026-01-17 00:50:31.820011 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-01-17 00:50:31.820015 | orchestrator | Saturday 17 January 2026 00:50:10 +0000 (0:00:03.726) 0:00:12.163 ******
2026-01-17 00:50:31.820019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-17 00:50:31.820023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-17 00:50:31.820027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-17 00:50:31.820031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-17 00:50:31.820037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-17 00:50:31.820045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-17 00:50:31.820049 | orchestrator |
2026-01-17 00:50:31.820052 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-01-17 00:50:31.820056 | orchestrator | Saturday 17 January 2026 00:50:12 +0000 (0:00:02.504) 0:00:14.668 ******
2026-01-17 00:50:31.820060 | orchestrator |
2026-01-17 00:50:31.820064 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-01-17 00:50:31.820070 | orchestrator | Saturday 17 January 2026 00:50:13 +0000 (0:00:00.219) 0:00:14.887 ******
2026-01-17 00:50:31.820074 | orchestrator |
2026-01-17 00:50:31.820078 | orchestrator | TASK [redis : Flush handlers]
************************************************** 2026-01-17 00:50:31.820082 | orchestrator | Saturday 17 January 2026 00:50:13 +0000 (0:00:00.342) 0:00:15.230 ****** 2026-01-17 00:50:31.820086 | orchestrator | 2026-01-17 00:50:31.820089 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-01-17 00:50:31.820093 | orchestrator | Saturday 17 January 2026 00:50:13 +0000 (0:00:00.340) 0:00:15.570 ****** 2026-01-17 00:50:31.820097 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:50:31.820101 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:50:31.820104 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:50:31.820108 | orchestrator | 2026-01-17 00:50:31.820112 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-01-17 00:50:31.820116 | orchestrator | Saturday 17 January 2026 00:50:22 +0000 (0:00:08.471) 0:00:24.041 ****** 2026-01-17 00:50:31.820119 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:50:31.820123 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:50:31.820127 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:50:31.820131 | orchestrator | 2026-01-17 00:50:31.820134 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:50:31.820138 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:50:31.820142 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:50:31.820146 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:50:31.820150 | orchestrator | 2026-01-17 00:50:31.820154 | orchestrator | 2026-01-17 00:50:31.820157 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 00:50:31.820161 | orchestrator | Saturday 17 January 
2026 00:50:30 +0000 (0:00:08.569) 0:00:32.610 ****** 2026-01-17 00:50:31.820165 | orchestrator | =============================================================================== 2026-01-17 00:50:31.820169 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.57s 2026-01-17 00:50:31.820172 | orchestrator | redis : Restart redis container ----------------------------------------- 8.47s 2026-01-17 00:50:31.820176 | orchestrator | redis : Copying over redis config files --------------------------------- 3.73s 2026-01-17 00:50:31.820182 | orchestrator | redis : Copying over default config.json files -------------------------- 3.17s 2026-01-17 00:50:31.820186 | orchestrator | redis : Check redis containers ------------------------------------------ 2.50s 2026-01-17 00:50:31.820190 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.99s 2026-01-17 00:50:31.820194 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.94s 2026-01-17 00:50:31.820197 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.94s 2026-01-17 00:50:31.820201 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.90s 2026-01-17 00:50:31.820205 | orchestrator | redis : include_tasks --------------------------------------------------- 0.76s 2026-01-17 00:50:31.820209 | orchestrator | 2026-01-17 00:50:31 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:50:31.820216 | orchestrator | 2026-01-17 00:50:31 | INFO  | Task dc60e894-f485-4ac3-9bf0-85b31779dd81 is in state STARTED 2026-01-17 00:50:31.820220 | orchestrator | 2026-01-17 00:50:31 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:50:31.820224 | orchestrator | 2026-01-17 00:50:31 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:50:31.820227 | 
orchestrator | 2026-01-17 00:50:31 | INFO  | Task 17d6a359-9051-4bcd-b756-f99e7fd42d4b is in state SUCCESS 2026-01-17 00:50:31.820231 | orchestrator | 2026-01-17 00:50:31 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:50:31.820235 | orchestrator | 2026-01-17 00:50:31 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:50:34.855299 | orchestrator | 2026-01-17 00:50:34 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:50:34.855863 | orchestrator | 2026-01-17 00:50:34 | INFO  | Task dc60e894-f485-4ac3-9bf0-85b31779dd81 is in state STARTED 2026-01-17 00:50:34.856652 | orchestrator | 2026-01-17 00:50:34 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:50:34.857686 | orchestrator | 2026-01-17 00:50:34 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:50:34.858889 | orchestrator | 2026-01-17 00:50:34 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:50:34.858956 | orchestrator | 2026-01-17 00:50:34 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:50:37.930555 | orchestrator | 2026-01-17 00:50:37 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:50:37.931624 | orchestrator | 2026-01-17 00:50:37 | INFO  | Task dc60e894-f485-4ac3-9bf0-85b31779dd81 is in state STARTED 2026-01-17 00:50:37.935164 | orchestrator | 2026-01-17 00:50:37 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:50:37.939793 | orchestrator | 2026-01-17 00:50:37 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:50:37.942001 | orchestrator | 2026-01-17 00:50:37 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:50:37.942076 | orchestrator | 2026-01-17 00:50:37 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:50:40.987817 | orchestrator | 2026-01-17 
00:50:40 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:50:40.988466 | orchestrator | 2026-01-17 00:50:40 | INFO  | Task dc60e894-f485-4ac3-9bf0-85b31779dd81 is in state STARTED 2026-01-17 00:50:40.999010 | orchestrator | 2026-01-17 00:50:40 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:50:41.003636 | orchestrator | 2026-01-17 00:50:41 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:50:41.003790 | orchestrator | 2026-01-17 00:50:41 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:50:41.003807 | orchestrator | 2026-01-17 00:50:41 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:50:44.116777 | orchestrator | 2026-01-17 00:50:44 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:50:44.117141 | orchestrator | 2026-01-17 00:50:44 | INFO  | Task dc60e894-f485-4ac3-9bf0-85b31779dd81 is in state STARTED 2026-01-17 00:50:44.118155 | orchestrator | 2026-01-17 00:50:44 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:50:44.118821 | orchestrator | 2026-01-17 00:50:44 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:50:44.120336 | orchestrator | 2026-01-17 00:50:44 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:50:44.120374 | orchestrator | 2026-01-17 00:50:44 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:50:47.146241 | orchestrator | 2026-01-17 00:50:47 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:50:47.147726 | orchestrator | 2026-01-17 00:50:47 | INFO  | Task dc60e894-f485-4ac3-9bf0-85b31779dd81 is in state STARTED 2026-01-17 00:50:47.149697 | orchestrator | 2026-01-17 00:50:47 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:50:47.150658 | orchestrator | 2026-01-17 
00:50:47 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:50:47.151932 | orchestrator | 2026-01-17 00:50:47 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:50:47.151957 | orchestrator | 2026-01-17 00:50:47 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:50:50.266600 | orchestrator | 2026-01-17 00:50:50 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:50:50.267796 | orchestrator | 2026-01-17 00:50:50 | INFO  | Task dc60e894-f485-4ac3-9bf0-85b31779dd81 is in state STARTED 2026-01-17 00:50:50.269210 | orchestrator | 2026-01-17 00:50:50 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:50:50.270707 | orchestrator | 2026-01-17 00:50:50 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:50:50.272105 | orchestrator | 2026-01-17 00:50:50 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:50:50.272194 | orchestrator | 2026-01-17 00:50:50 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:50:53.310826 | orchestrator | 2026-01-17 00:50:53 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:50:53.310873 | orchestrator | 2026-01-17 00:50:53 | INFO  | Task dc60e894-f485-4ac3-9bf0-85b31779dd81 is in state STARTED 2026-01-17 00:50:53.313092 | orchestrator | 2026-01-17 00:50:53 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:50:53.313741 | orchestrator | 2026-01-17 00:50:53 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:50:53.315218 | orchestrator | 2026-01-17 00:50:53 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:50:53.315241 | orchestrator | 2026-01-17 00:50:53 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:50:56.371779 | orchestrator | 2026-01-17 00:50:56 | INFO  | Task 
e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:50:56.373887 | orchestrator | 2026-01-17 00:50:56 | INFO  | Task dc60e894-f485-4ac3-9bf0-85b31779dd81 is in state STARTED 2026-01-17 00:50:56.374512 | orchestrator | 2026-01-17 00:50:56 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:50:56.376339 | orchestrator | 2026-01-17 00:50:56 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:50:56.379042 | orchestrator | 2026-01-17 00:50:56 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:50:56.379095 | orchestrator | 2026-01-17 00:50:56 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:50:59.424347 | orchestrator | 2026-01-17 00:50:59 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:50:59.425040 | orchestrator | 2026-01-17 00:50:59 | INFO  | Task dc60e894-f485-4ac3-9bf0-85b31779dd81 is in state STARTED 2026-01-17 00:50:59.427464 | orchestrator | 2026-01-17 00:50:59 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:50:59.428509 | orchestrator | 2026-01-17 00:50:59 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:50:59.429780 | orchestrator | 2026-01-17 00:50:59 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:50:59.429813 | orchestrator | 2026-01-17 00:50:59 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:51:02.462387 | orchestrator | 2026-01-17 00:51:02 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:51:02.464444 | orchestrator | 2026-01-17 00:51:02 | INFO  | Task dc60e894-f485-4ac3-9bf0-85b31779dd81 is in state STARTED 2026-01-17 00:51:02.466153 | orchestrator | 2026-01-17 00:51:02 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:51:02.467270 | orchestrator | 2026-01-17 00:51:02 | INFO  | Task 
1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:51:02.468572 | orchestrator | 2026-01-17 00:51:02 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:51:02.468604 | orchestrator | 2026-01-17 00:51:02 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:51:05.511862 | orchestrator | 2026-01-17 00:51:05 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:51:05.513568 | orchestrator | 2026-01-17 00:51:05 | INFO  | Task dc60e894-f485-4ac3-9bf0-85b31779dd81 is in state SUCCESS 2026-01-17 00:51:05.514770 | orchestrator | 2026-01-17 00:51:05.514812 | orchestrator | 2026-01-17 00:51:05.514820 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-17 00:51:05.514827 | orchestrator | 2026-01-17 00:51:05.514834 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-17 00:51:05.514841 | orchestrator | Saturday 17 January 2026 00:49:57 +0000 (0:00:00.386) 0:00:00.386 ****** 2026-01-17 00:51:05.514847 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:51:05.514855 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:51:05.514862 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:51:05.514868 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:51:05.514875 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:51:05.514896 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:51:05.514903 | orchestrator | 2026-01-17 00:51:05.514910 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-17 00:51:05.514916 | orchestrator | Saturday 17 January 2026 00:49:58 +0000 (0:00:01.326) 0:00:01.712 ****** 2026-01-17 00:51:05.514922 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-17 00:51:05.514929 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 
2026-01-17 00:51:05.514935 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-17 00:51:05.514942 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-17 00:51:05.514970 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-17 00:51:05.514977 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-17 00:51:05.514983 | orchestrator | 2026-01-17 00:51:05.514990 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-01-17 00:51:05.514996 | orchestrator | 2026-01-17 00:51:05.515002 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-01-17 00:51:05.515008 | orchestrator | Saturday 17 January 2026 00:50:00 +0000 (0:00:01.482) 0:00:03.195 ****** 2026-01-17 00:51:05.515016 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 00:51:05.515023 | orchestrator | 2026-01-17 00:51:05.515030 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-17 00:51:05.515036 | orchestrator | Saturday 17 January 2026 00:50:02 +0000 (0:00:02.268) 0:00:05.463 ****** 2026-01-17 00:51:05.515042 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-17 00:51:05.515048 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-17 00:51:05.515054 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-17 00:51:05.515060 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-17 00:51:05.515067 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-17 00:51:05.515073 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-17 
00:51:05.515079 | orchestrator | 2026-01-17 00:51:05.515086 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-17 00:51:05.515092 | orchestrator | Saturday 17 January 2026 00:50:04 +0000 (0:00:02.128) 0:00:07.592 ****** 2026-01-17 00:51:05.515098 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-17 00:51:05.515104 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-17 00:51:05.515110 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-17 00:51:05.515117 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-17 00:51:05.515122 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-17 00:51:05.515129 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-17 00:51:05.515135 | orchestrator | 2026-01-17 00:51:05.515142 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-17 00:51:05.515148 | orchestrator | Saturday 17 January 2026 00:50:06 +0000 (0:00:02.143) 0:00:09.736 ****** 2026-01-17 00:51:05.515154 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-01-17 00:51:05.515161 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:51:05.515168 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-01-17 00:51:05.515174 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:51:05.515180 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-01-17 00:51:05.515187 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:51:05.515193 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-01-17 00:51:05.515199 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:51:05.515206 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-01-17 00:51:05.515212 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:51:05.515218 | orchestrator | skipping: 
[testbed-node-5] => (item=openvswitch)  2026-01-17 00:51:05.515224 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:51:05.515230 | orchestrator | 2026-01-17 00:51:05.515237 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-01-17 00:51:05.515243 | orchestrator | Saturday 17 January 2026 00:50:08 +0000 (0:00:02.356) 0:00:12.092 ****** 2026-01-17 00:51:05.515249 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:51:05.515255 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:51:05.515260 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:51:05.515271 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:51:05.515277 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:51:05.515284 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:51:05.515290 | orchestrator | 2026-01-17 00:51:05.515296 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-01-17 00:51:05.515303 | orchestrator | Saturday 17 January 2026 00:50:10 +0000 (0:00:01.175) 0:00:13.268 ****** 2026-01-17 00:51:05.515329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515351 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515358 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515369 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515404 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515431 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515445 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}}) 2026-01-17 00:51:05.515452 | orchestrator | 2026-01-17 00:51:05.515460 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-01-17 00:51:05.515467 | orchestrator | Saturday 17 January 2026 00:50:12 +0000 (0:00:02.832) 0:00:16.101 ****** 2026-01-17 00:51:05.515475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515495 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515502 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515514 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515556 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515563 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515575 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515597 | orchestrator | 2026-01-17 00:51:05.515604 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-01-17 00:51:05.515611 | orchestrator | Saturday 17 January 2026 00:50:16 +0000 (0:00:03.418) 0:00:19.519 ****** 2026-01-17 00:51:05.515617 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:51:05.515623 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:51:05.515630 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:51:05.515637 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:51:05.515644 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:51:05.515650 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:51:05.515657 | orchestrator | 2026-01-17 00:51:05.515663 | orchestrator | TASK 
[openvswitch : Check openvswitch containers] ****************************** 2026-01-17 00:51:05.515669 | orchestrator | Saturday 17 January 2026 00:50:17 +0000 (0:00:01.366) 0:00:20.886 ****** 2026-01-17 00:51:05.515676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515700 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515720 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515727 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515837 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515852 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515859 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-17 00:51:05.515865 | orchestrator | 2026-01-17 00:51:05.515872 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-17 00:51:05.515878 | orchestrator | Saturday 17 January 2026 00:50:20 +0000 (0:00:02.572) 0:00:23.458 ****** 2026-01-17 00:51:05.515884 | orchestrator | 2026-01-17 00:51:05.515891 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-17 00:51:05.515897 | orchestrator | Saturday 17 January 2026 00:50:20 +0000 (0:00:00.296) 0:00:23.754 ****** 2026-01-17 00:51:05.515903 | orchestrator | 2026-01-17 00:51:05.515910 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-17 00:51:05.515915 | orchestrator | Saturday 17 January 2026 00:50:20 +0000 (0:00:00.138) 0:00:23.893 ****** 2026-01-17 00:51:05.515928 | orchestrator | 2026-01-17 00:51:05.515934 | orchestrator 
| TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-17 00:51:05.515940 | orchestrator | Saturday 17 January 2026 00:50:20 +0000 (0:00:00.195) 0:00:24.088 ****** 2026-01-17 00:51:05.515947 | orchestrator | 2026-01-17 00:51:05.515953 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-17 00:51:05.515959 | orchestrator | Saturday 17 January 2026 00:50:21 +0000 (0:00:00.133) 0:00:24.221 ****** 2026-01-17 00:51:05.515965 | orchestrator | 2026-01-17 00:51:05.515972 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-17 00:51:05.515978 | orchestrator | Saturday 17 January 2026 00:50:21 +0000 (0:00:00.125) 0:00:24.347 ****** 2026-01-17 00:51:05.515984 | orchestrator | 2026-01-17 00:51:05.515991 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-01-17 00:51:05.515997 | orchestrator | Saturday 17 January 2026 00:50:21 +0000 (0:00:00.255) 0:00:24.602 ****** 2026-01-17 00:51:05.516004 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:51:05.516010 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:51:05.516017 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:51:05.516023 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:51:05.516030 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:51:05.516036 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:51:05.516042 | orchestrator | 2026-01-17 00:51:05.516048 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-01-17 00:51:05.516055 | orchestrator | Saturday 17 January 2026 00:50:31 +0000 (0:00:10.040) 0:00:34.643 ****** 2026-01-17 00:51:05.516061 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:51:05.516067 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:51:05.516073 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:51:05.516080 | 
orchestrator | ok: [testbed-node-3] 2026-01-17 00:51:05.516086 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:51:05.516092 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:51:05.516098 | orchestrator | 2026-01-17 00:51:05.516104 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-17 00:51:05.516110 | orchestrator | Saturday 17 January 2026 00:50:32 +0000 (0:00:01.441) 0:00:36.084 ****** 2026-01-17 00:51:05.516116 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:51:05.516122 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:51:05.516129 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:51:05.516135 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:51:05.516142 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:51:05.516148 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:51:05.516154 | orchestrator | 2026-01-17 00:51:05.516161 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-01-17 00:51:05.516167 | orchestrator | Saturday 17 January 2026 00:50:40 +0000 (0:00:07.237) 0:00:43.322 ****** 2026-01-17 00:51:05.516174 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-01-17 00:51:05.516180 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-01-17 00:51:05.516186 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-01-17 00:51:05.516193 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-01-17 00:51:05.516199 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-01-17 00:51:05.516209 | orchestrator | changed: [testbed-node-5] => 
(item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-01-17 00:51:05.516215 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-01-17 00:51:05.516222 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-01-17 00:51:05.516235 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-01-17 00:51:05.516242 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-01-17 00:51:05.516248 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-01-17 00:51:05.516254 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-01-17 00:51:05.516260 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-17 00:51:05.516266 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-17 00:51:05.516273 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-17 00:51:05.516279 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-17 00:51:05.516286 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-17 00:51:05.516292 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-17 00:51:05.516298 | orchestrator | 2026-01-17 00:51:05.516305 | 
orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-01-17 00:51:05.516311 | orchestrator | Saturday 17 January 2026 00:50:48 +0000 (0:00:08.615) 0:00:51.937 ****** 2026-01-17 00:51:05.516318 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-01-17 00:51:05.516324 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:51:05.516330 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-01-17 00:51:05.516337 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:51:05.516343 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-01-17 00:51:05.516349 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:51:05.516355 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-01-17 00:51:05.516361 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-01-17 00:51:05.516368 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-01-17 00:51:05.516373 | orchestrator | 2026-01-17 00:51:05.516380 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-01-17 00:51:05.516386 | orchestrator | Saturday 17 January 2026 00:50:51 +0000 (0:00:02.571) 0:00:54.509 ****** 2026-01-17 00:51:05.516393 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-01-17 00:51:05.516399 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:51:05.516455 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-01-17 00:51:05.516463 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:51:05.516470 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-01-17 00:51:05.516477 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:51:05.516483 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-01-17 00:51:05.516490 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-01-17 00:51:05.516496 | orchestrator | 
changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-01-17 00:51:05.516503 | orchestrator | 2026-01-17 00:51:05.516509 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-17 00:51:05.516516 | orchestrator | Saturday 17 January 2026 00:50:54 +0000 (0:00:03.369) 0:00:57.878 ****** 2026-01-17 00:51:05.516522 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:51:05.516528 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:51:05.516535 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:51:05.516541 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:51:05.516546 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:51:05.516553 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:51:05.516564 | orchestrator | 2026-01-17 00:51:05.516570 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:51:05.516577 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-17 00:51:05.516585 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-17 00:51:05.516591 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-17 00:51:05.516598 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-17 00:51:05.516605 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-17 00:51:05.516615 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-17 00:51:05.516622 | orchestrator | 2026-01-17 00:51:05.516629 | orchestrator | 2026-01-17 00:51:05.516636 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 00:51:05.516643 | orchestrator | Saturday 
17 January 2026 00:51:03 +0000 (0:00:08.954) 0:01:06.832 ****** 2026-01-17 00:51:05.516650 | orchestrator | =============================================================================== 2026-01-17 00:51:05.516661 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 16.19s 2026-01-17 00:51:05.516668 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.04s 2026-01-17 00:51:05.516674 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.62s 2026-01-17 00:51:05.516681 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.42s 2026-01-17 00:51:05.516687 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.37s 2026-01-17 00:51:05.516694 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.83s 2026-01-17 00:51:05.516700 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.57s 2026-01-17 00:51:05.516707 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.57s 2026-01-17 00:51:05.516713 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.36s 2026-01-17 00:51:05.516720 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.27s 2026-01-17 00:51:05.516726 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.14s 2026-01-17 00:51:05.516733 | orchestrator | module-load : Load modules ---------------------------------------------- 2.13s 2026-01-17 00:51:05.516739 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.48s 2026-01-17 00:51:05.516746 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.44s 2026-01-17 00:51:05.516752 | orchestrator | openvswitch : Copying over 
ovs-vsctl wrapper ---------------------------- 1.37s 2026-01-17 00:51:05.516759 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.33s 2026-01-17 00:51:05.516764 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.18s 2026-01-17 00:51:05.516770 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.14s 2026-01-17 00:51:05.516777 | orchestrator | 2026-01-17 00:51:05 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:51:05.516784 | orchestrator | 2026-01-17 00:51:05 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:51:05.516949 | orchestrator | 2026-01-17 00:51:05 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:51:05.518297 | orchestrator | 2026-01-17 00:51:05 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:51:05.518323 | orchestrator | 2026-01-17 00:51:05 | INFO  | Wait 1 second(s) until the next check
00:51:32.928561 | orchestrator | 2026-01-17 00:51:32 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:51:32.928630 | orchestrator | 2026-01-17 00:51:32 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:51:35.989241 | orchestrator | 2026-01-17 00:51:35 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:51:35.989488 | orchestrator | 2026-01-17 00:51:35 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:51:35.990616 | orchestrator | 2026-01-17 00:51:35 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:51:35.992799 | orchestrator | 2026-01-17 00:51:35 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:51:35.993670 | orchestrator | 2026-01-17 00:51:35 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:51:35.993705 | orchestrator | 2026-01-17 00:51:35 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:51:39.033041 | orchestrator | 2026-01-17 00:51:39 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:51:39.033824 | orchestrator | 2026-01-17 00:51:39 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:51:39.034805 | orchestrator | 2026-01-17 00:51:39 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:51:39.036168 | orchestrator | 2026-01-17 00:51:39 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:51:39.037297 | orchestrator | 2026-01-17 00:51:39 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:51:39.037346 | orchestrator | 2026-01-17 00:51:39 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:51:42.163569 | orchestrator | 2026-01-17 00:51:42 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:51:42.163654 | orchestrator 
| 2026-01-17 00:51:42 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:51:42.163662 | orchestrator | 2026-01-17 00:51:42 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:51:42.163668 | orchestrator | 2026-01-17 00:51:42 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:51:42.163674 | orchestrator | 2026-01-17 00:51:42 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:51:42.163680 | orchestrator | 2026-01-17 00:51:42 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:51:45.306101 | orchestrator | 2026-01-17 00:51:45 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:51:45.306220 | orchestrator | 2026-01-17 00:51:45 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:51:45.306549 | orchestrator | 2026-01-17 00:51:45 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:51:45.307084 | orchestrator | 2026-01-17 00:51:45 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:51:45.307685 | orchestrator | 2026-01-17 00:51:45 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:51:45.307736 | orchestrator | 2026-01-17 00:51:45 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:51:48.704468 | orchestrator | 2026-01-17 00:51:48 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:51:48.704555 | orchestrator | 2026-01-17 00:51:48 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:51:48.704565 | orchestrator | 2026-01-17 00:51:48 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:51:48.704571 | orchestrator | 2026-01-17 00:51:48 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:51:48.704578 | orchestrator | 
2026-01-17 00:51:48 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:51:48.704612 | orchestrator | 2026-01-17 00:51:48 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:51:51.545285 | orchestrator | 2026-01-17 00:51:51 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:51:51.545536 | orchestrator | 2026-01-17 00:51:51 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:51:51.546181 | orchestrator | 2026-01-17 00:51:51 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:51:51.546561 | orchestrator | 2026-01-17 00:51:51 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:51:51.548122 | orchestrator | 2026-01-17 00:51:51 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:51:51.548153 | orchestrator | 2026-01-17 00:51:51 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:51:54.575742 | orchestrator | 2026-01-17 00:51:54 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state STARTED 2026-01-17 00:51:54.579119 | orchestrator | 2026-01-17 00:51:54 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:51:54.595620 | orchestrator | 2026-01-17 00:51:54 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:51:54.596506 | orchestrator | 2026-01-17 00:51:54 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:51:54.597779 | orchestrator | 2026-01-17 00:51:54 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:51:54.597813 | orchestrator | 2026-01-17 00:51:54 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:51:57.644818 | orchestrator | 2026-01-17 00:51:57 | INFO  | Task e8c6e5e5-e14a-4a9e-a291-774a61f042df is in state SUCCESS 2026-01-17 00:51:57.646816 | orchestrator | 2026-01-17 00:51:57.646885 | 
orchestrator |
2026-01-17 00:51:57.646902 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-01-17 00:51:57.646914 | orchestrator |
2026-01-17 00:51:57.646926 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-01-17 00:51:57.646939 | orchestrator | Saturday 17 January 2026 00:47:27 +0000 (0:00:00.173) 0:00:00.173 ******
2026-01-17 00:51:57.646950 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:51:57.646961 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:51:57.646973 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:51:57.646985 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:51:57.646998 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:51:57.647009 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:51:57.647021 | orchestrator |
2026-01-17 00:51:57.647032 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-01-17 00:51:57.647044 | orchestrator | Saturday 17 January 2026 00:47:28 +0000 (0:00:00.694) 0:00:00.868 ******
2026-01-17 00:51:57.647056 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:51:57.647068 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:51:57.647079 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:51:57.647091 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:51:57.647116 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:51:57.647128 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:51:57.647141 | orchestrator |
2026-01-17 00:51:57.647154 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-01-17 00:51:57.647166 | orchestrator | Saturday 17 January 2026 00:47:28 +0000 (0:00:00.542) 0:00:01.410 ******
2026-01-17 00:51:57.647179 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:51:57.647190 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:51:57.647203 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:51:57.647214 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:51:57.647225 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:51:57.647254 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:51:57.647266 | orchestrator |
2026-01-17 00:51:57.647277 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-01-17 00:51:57.647289 | orchestrator | Saturday 17 January 2026 00:47:29 +0000 (0:00:00.588) 0:00:01.998 ******
2026-01-17 00:51:57.647301 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:51:57.647314 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:51:57.647326 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:51:57.647337 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:51:57.647348 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:51:57.647360 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:51:57.647371 | orchestrator |
2026-01-17 00:51:57.647401 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-01-17 00:51:57.647413 | orchestrator | Saturday 17 January 2026 00:47:31 +0000 (0:00:02.499) 0:00:04.498 ******
2026-01-17 00:51:57.647425 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:51:57.647437 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:51:57.647449 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:51:57.647460 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:51:57.647472 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:51:57.647483 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:51:57.647496 | orchestrator |
2026-01-17 00:51:57.647508 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-01-17 00:51:57.647520 | orchestrator | Saturday 17 January 2026 00:47:33 +0000 (0:00:01.948) 0:00:06.446 ******
2026-01-17 00:51:57.647532 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:51:57.647544 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:51:57.647556 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:51:57.647568 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:51:57.647580 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:51:57.647591 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:51:57.647603 | orchestrator |
2026-01-17 00:51:57.647615 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-01-17 00:51:57.647628 | orchestrator | Saturday 17 January 2026 00:47:36 +0000 (0:00:02.309) 0:00:08.756 ******
2026-01-17 00:51:57.647640 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:51:57.647651 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:51:57.647662 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:51:57.647674 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:51:57.647685 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:51:57.647696 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:51:57.647709 | orchestrator |
2026-01-17 00:51:57.647726 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-01-17 00:51:57.647746 | orchestrator | Saturday 17 January 2026 00:47:36 +0000 (0:00:00.814) 0:00:09.570 ******
2026-01-17 00:51:57.647767 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:51:57.647782 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:51:57.647800 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:51:57.647821 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:51:57.647849 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:51:57.647862 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:51:57.647874 | orchestrator |
2026-01-17 00:51:57.647886 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-01-17 00:51:57.647899 | orchestrator | Saturday 17 January 2026 00:47:37 +0000 (0:00:00.513) 0:00:10.084 ******
2026-01-17 00:51:57.647911 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-17 00:51:57.647922 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-17 00:51:57.647934 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:51:57.647945 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-17 00:51:57.647957 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-17 00:51:57.647969 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:51:57.647990 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-17 00:51:57.648003 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-17 00:51:57.648015 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:51:57.648026 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-17 00:51:57.648054 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-17 00:51:57.648067 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:51:57.648126 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-17 00:51:57.648138 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-17 00:51:57.648149 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:51:57.648161 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-17 00:51:57.648173 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-17 00:51:57.648184 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:51:57.648196 | orchestrator |
2026-01-17 00:51:57.648207 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-01-17 00:51:57.648218 | orchestrator | Saturday 17 January 2026 00:47:38 +0000 (0:00:01.131) 0:00:11.216 ******
2026-01-17 00:51:57.648229 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:51:57.648240 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:51:57.648252 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:51:57.648275 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:51:57.648297 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:51:57.648308 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:51:57.648320 | orchestrator |
2026-01-17 00:51:57.648333 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-01-17 00:51:57.648345 | orchestrator | Saturday 17 January 2026 00:47:40 +0000 (0:00:01.832) 0:00:13.048 ******
2026-01-17 00:51:57.648357 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:51:57.648364 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:51:57.648371 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:51:57.648421 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:51:57.648428 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:51:57.648435 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:51:57.648441 | orchestrator |
2026-01-17 00:51:57.648448 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-01-17 00:51:57.648455 | orchestrator | Saturday 17 January 2026 00:47:42 +0000 (0:00:01.550) 0:00:14.599 ******
2026-01-17 00:51:57.648461 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:51:57.648468 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:51:57.648475 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:51:57.648482 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:51:57.648488 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:51:57.648495 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:51:57.648501 | orchestrator |
2026-01-17 00:51:57.648508 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-01-17 00:51:57.648514 | orchestrator | Saturday 17 January 2026 00:47:47 +0000 (0:00:05.124) 0:00:19.723 ******
2026-01-17 00:51:57.648521 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:51:57.648527 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:51:57.648534 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:51:57.648541 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:51:57.648547 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:51:57.648554 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:51:57.648560 | orchestrator |
2026-01-17 00:51:57.648567 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-01-17 00:51:57.648573 | orchestrator | Saturday 17 January 2026 00:47:49 +0000 (0:00:02.179) 0:00:21.903 ******
2026-01-17 00:51:57.648580 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:51:57.648593 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:51:57.648600 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:51:57.648606 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:51:57.648613 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:51:57.648619 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:51:57.648626 | orchestrator |
2026-01-17 00:51:57.648633 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-01-17 00:51:57.648640 | orchestrator | Saturday 17 January 2026 00:47:51 +0000 (0:00:01.926) 0:00:23.829 ******
2026-01-17 00:51:57.648647 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:51:57.648653 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:51:57.648660 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:51:57.648666 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:51:57.648673 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:51:57.648679 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:51:57.648686 | orchestrator |
2026-01-17 00:51:57.648693 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-01-17 00:51:57.648699 | orchestrator | Saturday 17 January 2026 00:47:52 +0000 (0:00:01.187) 0:00:25.016 ******
2026-01-17 00:51:57.648706 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-01-17 00:51:57.648713 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-01-17 00:51:57.648720 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:51:57.648726 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-01-17 00:51:57.648733 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-01-17 00:51:57.648740 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:51:57.648746 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-01-17 00:51:57.648753 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-01-17 00:51:57.648759 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:51:57.648766 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-01-17 00:51:57.648772 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-01-17 00:51:57.648779 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:51:57.648786 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-01-17 00:51:57.648792 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-01-17 00:51:57.648799 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:51:57.648805 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-01-17 00:51:57.648812 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-01-17 00:51:57.648819 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:51:57.648825 | orchestrator |
2026-01-17 00:51:57.648832 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-01-17 00:51:57.648846 | orchestrator | Saturday 17 January 2026 00:47:53 +0000 (0:00:00.698) 0:00:25.715 ******
2026-01-17 00:51:57.648854 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:51:57.648860 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:51:57.648867 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:51:57.648873 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:51:57.648880 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:51:57.648886 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:51:57.648893 | orchestrator |
2026-01-17 00:51:57.648900 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-01-17 00:51:57.648906 | orchestrator | Saturday 17 January 2026 00:47:53 +0000 (0:00:00.790) 0:00:26.505 ******
2026-01-17 00:51:57.648913 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:51:57.648919 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:51:57.648926 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:51:57.648933 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:51:57.648940 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:51:57.648946 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:51:57.648957 | orchestrator |
2026-01-17 00:51:57.648963 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-01-17 00:51:57.648970 | orchestrator |
2026-01-17 00:51:57.648977 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-01-17 00:51:57.648983 | orchestrator | Saturday 17 January 2026 00:47:55 +0000 (0:00:01.455) 0:00:27.961 ******
2026-01-17 00:51:57.648990 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:51:57.648996 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:51:57.649003 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:51:57.649009 | orchestrator | 2026-01-17 00:51:57.649016 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-01-17 00:51:57.649022 | orchestrator | Saturday 17 January 2026 00:47:56 +0000 (0:00:01.430) 0:00:29.391 ****** 2026-01-17 00:51:57.649029 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:51:57.649036 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:51:57.649042 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:51:57.649049 | orchestrator | 2026-01-17 00:51:57.649055 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-01-17 00:51:57.649062 | orchestrator | Saturday 17 January 2026 00:47:58 +0000 (0:00:01.243) 0:00:30.635 ****** 2026-01-17 00:51:57.649069 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:51:57.649075 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:51:57.649082 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:51:57.649088 | orchestrator | 2026-01-17 00:51:57.649095 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-01-17 00:51:57.649101 | orchestrator | Saturday 17 January 2026 00:47:59 +0000 (0:00:01.107) 0:00:31.742 ****** 2026-01-17 00:51:57.649108 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:51:57.649115 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:51:57.649122 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:51:57.649128 | orchestrator | 2026-01-17 00:51:57.649135 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-01-17 00:51:57.649141 | orchestrator | Saturday 17 January 2026 00:48:00 +0000 (0:00:01.181) 0:00:32.923 ****** 2026-01-17 00:51:57.649591 | orchestrator | skipping: [testbed-node-0] 
2026-01-17 00:51:57.649615 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:51:57.649627 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:51:57.649639 | orchestrator | 2026-01-17 00:51:57.649651 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-01-17 00:51:57.649662 | orchestrator | Saturday 17 January 2026 00:48:00 +0000 (0:00:00.347) 0:00:33.271 ****** 2026-01-17 00:51:57.649669 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:51:57.649676 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:51:57.649682 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:51:57.649689 | orchestrator | 2026-01-17 00:51:57.649695 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-01-17 00:51:57.649702 | orchestrator | Saturday 17 January 2026 00:48:01 +0000 (0:00:01.130) 0:00:34.401 ****** 2026-01-17 00:51:57.649709 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:51:57.649715 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:51:57.649722 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:51:57.649729 | orchestrator | 2026-01-17 00:51:57.649736 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-01-17 00:51:57.649742 | orchestrator | Saturday 17 January 2026 00:48:03 +0000 (0:00:02.160) 0:00:36.562 ****** 2026-01-17 00:51:57.649749 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:51:57.649756 | orchestrator | 2026-01-17 00:51:57.649763 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-01-17 00:51:57.649770 | orchestrator | Saturday 17 January 2026 00:48:04 +0000 (0:00:00.572) 0:00:37.134 ****** 2026-01-17 00:51:57.649776 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:51:57.649783 | orchestrator | ok: [testbed-node-0] 2026-01-17 
00:51:57.649790 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:51:57.649796 | orchestrator | 2026-01-17 00:51:57.649810 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-01-17 00:51:57.649817 | orchestrator | Saturday 17 January 2026 00:48:07 +0000 (0:00:03.109) 0:00:40.244 ****** 2026-01-17 00:51:57.649824 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:51:57.649830 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:51:57.649837 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:51:57.649844 | orchestrator | 2026-01-17 00:51:57.649850 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-01-17 00:51:57.649857 | orchestrator | Saturday 17 January 2026 00:48:08 +0000 (0:00:00.682) 0:00:40.926 ****** 2026-01-17 00:51:57.649864 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:51:57.649871 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:51:57.649877 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:51:57.649884 | orchestrator | 2026-01-17 00:51:57.649891 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-01-17 00:51:57.649897 | orchestrator | Saturday 17 January 2026 00:48:09 +0000 (0:00:01.140) 0:00:42.067 ****** 2026-01-17 00:51:57.649904 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:51:57.649911 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:51:57.649918 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:51:57.649924 | orchestrator | 2026-01-17 00:51:57.649931 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-01-17 00:51:57.649945 | orchestrator | Saturday 17 January 2026 00:48:11 +0000 (0:00:01.679) 0:00:43.747 ****** 2026-01-17 00:51:57.649952 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:51:57.649958 | orchestrator | skipping: [testbed-node-1] 2026-01-17 
00:51:57.649965 | orchestrator | skipping: [testbed-node-2]

TASK [k3s_server : Deploy kube-vip manifest] ***********************************
Saturday 17 January 2026 00:48:12 +0000 (0:00:01.007) 0:00:44.754 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
Saturday 17 January 2026 00:48:12 +0000 (0:00:00.543) 0:00:45.298 ******
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
Saturday 17 January 2026 00:48:14 +0000 (0:00:01.689) 0:00:46.988 ******
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
Saturday 17 January 2026 00:48:17 +0000 (0:00:02.774) 0:00:49.763 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
Saturday 17 January 2026 00:48:18 +0000 (0:00:01.028) 0:00:50.792 ******
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Save logs of k3s-init.service] ******************************
Saturday 17 January 2026 00:49:01 +0000 (0:00:43.314) 0:01:34.107 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Kill the temporary service used for initialization] *********
Saturday 17 January 2026 00:49:01 +0000 (0:00:00.388) 0:01:34.496 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy K3s service file] **************************************
Saturday 17 January 2026 00:49:03 +0000 (0:00:01.289) 0:01:35.786 ******
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [k3s_server : Enable and check K3s service] *******************************
Saturday 17 January 2026 00:49:05 +0000 (0:00:01.835) 0:01:37.621 ******
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [k3s_server : Wait for node-token] ****************************************
Saturday 17 January 2026 00:49:30 +0000 (0:00:24.999) 0:02:02.620 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Register node-token file access mode] ***********************
Saturday 17 January 2026 00:49:30 +0000 (0:00:00.712) 0:02:03.333 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Change file access node-token] ******************************
Saturday 17 January 2026 00:49:31 +0000 (0:00:00.653) 0:02:03.986 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Read node-token from master] ********************************
Saturday 17 January 2026 00:49:32 +0000 (0:00:00.715) 0:02:04.702 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Store Master node-token] ************************************
Saturday 17 January 2026 00:49:33 +0000 (0:00:00.917) 0:02:05.619 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Restore node-token file access] *****************************
Saturday 17 January 2026 00:49:33 +0000 (0:00:00.329) 0:02:05.950 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create directory .kube] *************************************
Saturday 17 January 2026 00:49:33 +0000 (0:00:00.604) 0:02:06.554 ******
changed: [testbed-node-0]
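The node-token sequence above (register the file mode, widen it, read the token, restore the mode) can be sketched in shell. This is a minimal illustration against a throwaway file; the path and token value are stand-ins, not the real `/var/lib/rancher/k3s/server/node-token` contents.

```shell
# Sketch of the node-token permission dance (hypothetical local file).
TOKEN_FILE=$(mktemp)                    # stands in for /var/lib/rancher/k3s/server/node-token
echo "K10deadbeef::server:example" > "$TOKEN_FILE"
chmod 0600 "$TOKEN_FILE"                # k3s writes the token readable by root only
orig_mode=$(stat -c %a "$TOKEN_FILE")   # "Register node-token file access mode"
chmod 0644 "$TOKEN_FILE"                # "Change file access node-token" so it can be slurped
token=$(cat "$TOKEN_FILE")              # "Read node-token from master" / "Store Master node-token"
chmod "$orig_mode" "$TOKEN_FILE"        # "Restore node-token file access"
echo "token read, mode back to $(stat -c %a "$TOKEN_FILE")"
rm -f "$TOKEN_FILE"
```

Restoring the registered mode (rather than hardcoding 0600) keeps the task idempotent even if the file started with a non-default mode.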
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy config file to user home directory] ********************
Saturday 17 January 2026 00:49:34 +0000 (0:00:00.612) 0:02:07.166 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
Saturday 17 January 2026 00:49:35 +0000 (0:00:01.133) 0:02:08.300 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create kubectl symlink] *************************************
Saturday 17 January 2026 00:49:36 +0000 (0:00:00.958) 0:02:09.259 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Create crictl symlink] **************************************
Saturday 17 January 2026 00:49:37 +0000 (0:00:00.335) 0:02:09.594 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Get contents of manifests folder] ***************************
Saturday 17 January 2026 00:49:37 +0000 (0:00:00.318) 0:02:09.912 ******
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Get sub dirs of manifests folder] ***************************
Saturday 17 January 2026 00:49:38 +0000 (0:00:00.965) 0:02:10.878 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
Saturday 17 January 2026 00:49:38 +0000 (0:00:00.693) 0:02:11.572 ******
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)

PLAY [Deploy k3s worker nodes] *************************************************

TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
Saturday 17 January 2026 00:49:42 +0000 (0:00:03.376) 0:02:14.949 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Check if system is PXE-booted] *******************************
Saturday 17 January 2026 00:49:42 +0000 (0:00:00.607) 0:02:15.556 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Set fact for PXE-booted system] ******************************
Saturday 17 January 2026 00:49:43 +0000 (0:00:00.665) 0:02:16.221 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Include http_proxy configuration tasks] **********************
Saturday 17 January 2026 00:49:43 +0000 (0:00:00.613) 0:02:16.553 ******
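The manifest-pruning task above removes both plain files and sub-directories (like `metrics-server`) with a single loop so k3s does not re-apply the bootstrap manifests on restart. A minimal sketch against a throwaway directory; the real path is `/var/lib/rancher/k3s/server/manifests` and the item names below are taken from the log.

```shell
# Sketch of pruning bootstrap-only manifests from a stand-in directory.
MANIFESTS=$(mktemp -d)                  # stands in for /var/lib/rancher/k3s/server/manifests
touch "$MANIFESTS/vip.yaml" "$MANIFESTS/ccm.yaml" "$MANIFESTS/coredns.yaml"
mkdir "$MANIFESTS/metrics-server"       # sub dirs are collected and removed too
for item in vip.yaml ccm.yaml coredns.yaml metrics-server; do
  rm -rf "${MANIFESTS:?}/$item"         # handles files and directories alike, as state: absent does
done
remaining=$(ls -A "$MANIFESTS" | wc -l)
echo "remaining entries: $remaining"
rmdir "$MANIFESTS"
```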
included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [k3s_agent : Create k3s-node.service.d directory] *************************
Saturday 17 January 2026 00:49:44 +0000 (0:00:00.613) 0:02:17.167 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
Saturday 17 January 2026 00:49:44 +0000 (0:00:00.265) 0:02:17.433 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
Saturday 17 January 2026 00:49:45 +0000 (0:00:00.262) 0:02:17.695 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
Saturday 17 January 2026 00:49:45 +0000 (0:00:00.260) 0:02:17.956 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
Saturday 17 January 2026 00:49:46 +0000 (0:00:00.727) 0:02:18.684 ******
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

TASK [k3s_agent : Configure the k3s service] ***********************************
Saturday 17 January 2026 00:49:47 +0000 (0:00:01.012) 0:02:19.697 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Manage k3s service] ******************************************
Saturday 17 January 2026 00:49:48 +0000 (0:00:01.274) 0:02:20.971 ******
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-3]

PLAY [Prepare kubeconfig file] *************************************************

TASK [Get home directory of operator user] *************************************
Saturday 17 January 2026 00:49:58 +0000 (0:00:09.846) 0:02:30.818 ******
ok: [testbed-manager]

TASK [Create .kube directory] **************************************************
Saturday 17 January 2026 00:49:59 +0000 (0:00:00.829) 0:02:31.648 ******
changed: [testbed-manager]

TASK [Get kubeconfig file] *****************************************************
Saturday 17 January 2026 00:49:59 +0000 (0:00:00.514) 0:02:32.162 ******
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Saturday 17 January 2026 00:50:00 +0000 (0:00:00.961) 0:02:32.794 ******
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Saturday 17 January 2026 00:50:01 +0000 (0:00:00.623) 0:02:33.755 ******
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Saturday 17 January 2026 00:50:01 +0000 (0:00:01.924) 0:02:34.379 ******
changed: [testbed-manager -> localhost]
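The kubeconfig preparation above fetches the admin config from the first master and repoints its server address. The rewrite step can be sketched with `sed`; the kubeconfig body here is a made-up minimal example, and only the VIP address `192.168.16.8:6443` is taken from the log.

```shell
# Sketch of repointing a fetched kubeconfig at the cluster VIP.
KUBECONFIG_FILE=$(mktemp)
cat > "$KUBECONFIG_FILE" <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF
# Replace the node-local API address with the VIP the cluster is configured for.
sed -i 's|server: https://127.0.0.1:6443|server: https://192.168.16.8:6443|' "$KUBECONFIG_FILE"
grep 'server:' "$KUBECONFIG_FILE"
```

The same rewrite runs twice in the play: once for the operator's `~/.kube/config` and once for the copy used inside the manager service.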
TASK [Change server address in the kubeconfig inside the manager service] ******
Saturday 17 January 2026 00:50:03 +0000 (0:00:00.761) 0:02:36.303 ******
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Saturday 17 January 2026 00:50:04 +0000 (0:00:00.440) 0:02:37.065 ******
changed: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Saturday 17 January 2026 00:50:04 +0000 (0:00:00.706) 0:02:37.506 ******
changed: [testbed-manager]

PLAY [Apply role kubectl] ******************************************************

TASK [kubectl : Gather variables for each operating system] ********************
Saturday 17 January 2026 00:50:05 +0000 (0:00:00.152) 0:02:38.212 ******
ok: [testbed-manager]

TASK [kubectl : Include distribution specific install tasks] *******************
Saturday 17 January 2026 00:50:05 +0000 (0:00:00.216) 0:02:38.364 ******
included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager

TASK [kubectl : Remove old architecture-dependent repository] ******************
Saturday 17 January 2026 00:50:06 +0000 (0:00:00.752) 0:02:38.580 ******
ok: [testbed-manager]

TASK [kubectl : Install apt-transport-https package] ***************************
Saturday 17 January 2026 00:50:06 +0000 (0:00:01.602) 0:02:39.332 ******
ok: [testbed-manager]

TASK [kubectl : Add repository gpg key] ****************************************
Saturday 17 January 2026 00:50:08 +0000 (0:00:00.953) 0:02:40.935 ******
changed: [testbed-manager]

TASK [kubectl : Set permissions of gpg key] ************************************
Saturday 17 January 2026 00:50:09 +0000 (0:00:00.503) 0:02:41.889 ******
ok: [testbed-manager]

TASK [kubectl : Add repository Debian] *****************************************
Saturday 17 January 2026 00:50:09 +0000 (0:00:09.435) 0:02:42.392 ******
changed: [testbed-manager]

TASK [kubectl : Install required packages] *************************************
Saturday 17 January 2026 00:50:19 +0000 (0:00:13.608) 0:02:51.827 ******
changed: [testbed-manager]

TASK [kubectl : Remove kubectl symlink] ****************************************
Saturday 17 January 2026 00:50:32 +0000 (0:00:00.574) 0:03:05.435 ******
ok: [testbed-manager]

PLAY [Run post actions on master nodes] ****************************************

TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
Saturday 17 January 2026 00:50:33 +0000 (0:00:00.348) 0:03:06.010 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server_post : Deploy calico] *****************************************
Saturday 17 January 2026 00:50:33 +0000 (0:00:00.324) 0:03:06.359 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Deploy cilium] *****************************************
Saturday 17 January 2026 00:50:34 +0000 (0:00:00.869) 0:03:06.683 ******
included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server_post : Create tmp directory on first master] ******************
Saturday 17 January 2026 00:50:34 +0000 (0:00:00.976) 0:03:07.552 ******
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
Saturday 17 January 2026 00:50:35 +0000 (0:00:01.050) 0:03:08.528 ******
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
Saturday 17 January 2026 00:50:37 +0000 (0:00:00.316) 0:03:09.579 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for existing Cilium install] **********************
Saturday 17 January 2026 00:50:37 +0000 (0:00:01.016) 0:03:09.896 ******
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Check Cilium version] **********************************
Saturday 17 January 2026 00:50:38 +0000 (0:00:00.157) 0:03:10.913 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Parse installed Cilium version] ************************
Saturday 17 January 2026 00:50:38 +0000 (0:00:00.178) 0:03:11.071 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium needs update] **********************
Saturday 17 January 2026 00:50:38 +0000 (0:00:00.126) 0:03:11.249 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Log result] ********************************************
Saturday 17 January 2026 00:50:38 +0000 (0:00:00.144) 0:03:11.376 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Install Cilium] ****************************************
Saturday 17 January 2026 00:50:38 +0000 (0:00:04.747) 0:03:11.520 ******
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for Cilium resources] *****************************
Saturday 17 January 2026 00:50:43 +0000 (0:00:42.442) 0:03:16.268 ******
ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
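The "Wait for Cilium resources" task above polls each workload until it reports ready, retrying as seen in the log. A hedged sketch of that loop; since no cluster is available here, `kubectl` is replaced by a stub function for illustration, while the resource list matches the items in the log.

```shell
# Sketch of waiting on each Cilium workload; kubectl is a local stub, NOT the real binary.
kubectl() {                              # stand-in; the real task shells out to kubectl itself
  echo "$3 successfully rolled out"
}
for res in deployment/cilium-operator daemonset/cilium deployment/hubble-relay deployment/hubble-ui; do
  kubectl rollout status "$res" --timeout=30s || exit 1
done
```

With a real cluster, `kubectl rollout status` blocks until the deployment or daemonset converges; the surrounding task adds its own retry loop on top (30 retries in this run) rather than relying on a single timeout.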
ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)

TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
Saturday 17 January 2026 00:51:26 +0000 (0:00:01.314) 0:03:58.711 ******
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Copy BGP manifests to first master] ********************
Saturday 17 January 2026 00:51:27 +0000 (0:00:01.469) 0:04:00.025 ******
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Apply BGP manifests] ***********************************
Saturday 17 January 2026 00:51:28 +0000 (0:00:00.889) 0:04:01.495 ******
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
Saturday 17 January 2026 00:51:29 +0000 (0:00:00.101) 0:04:02.384 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for BGP config resources] *************************
Saturday 17 January 2026 00:51:29 +0000 (0:00:01.669) 0:04:02.485 ******
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)

TASK [k3s_server_post : Deploy metallb pool] ***********************************
Saturday 17 January 2026 00:51:31 +0000 (0:00:00.277) 0:04:04.155 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
Saturday 17 January 2026 00:51:31 +0000 (0:00:00.996) 0:04:04.433 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role k9s] **********************************************************

TASK [k9s : Gather variables for each operating system] ************************
Saturday 17 January 2026 00:51:32 +0000 (0:00:00.142) 0:04:05.430 ******
ok: [testbed-manager]

TASK [k9s : Include distribution specific install tasks] ***********************
Saturday 17 January 2026 00:51:32 +0000 (0:00:00.192) 0:04:05.572 ******
included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager

TASK [k9s : Install k9s packages] **********************************************
Saturday 17 January 2026 00:51:33 +0000 (0:00:06.198) 0:04:05.765 ******
changed: [testbed-manager]

PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************

TASK [Merge labels, annotations, and taints] ***********************************
Saturday 17 January 2026 00:51:39 +0000 (0:00:01.058) 0:04:11.963 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Manage labels] ***********************************************************
Saturday 17 January 2026 00:51:40 +0000 0:04:13.022 ******
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-4 -> localhost] =>
(item=node-role.osism.tech/compute-plane=true) 2026-01-17 00:51:57.653658 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-17 00:51:57.653665 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-17 00:51:57.653672 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-17 00:51:57.653679 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-17 00:51:57.653687 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-17 00:51:57.653694 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-17 00:51:57.653701 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-17 00:51:57.653708 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-17 00:51:57.653715 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-17 00:51:57.653722 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-17 00:51:57.653736 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-17 00:51:57.653744 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-17 00:51:57.653753 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-17 00:51:57.653762 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-17 00:51:57.653770 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-17 00:51:57.653778 | orchestrator | ok: [testbed-node-4 -> 
localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-17 00:51:57.653789 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-17 00:51:57.653802 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-17 00:51:57.653812 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-17 00:51:57.653824 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-17 00:51:57.653833 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-17 00:51:57.653841 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-17 00:51:57.653849 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-17 00:51:57.653857 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-17 00:51:57.653866 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-17 00:51:57.653874 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-17 00:51:57.653882 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-17 00:51:57.653896 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-17 00:51:57.653904 | orchestrator | 2026-01-17 00:51:57.653912 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-01-17 00:51:57.653921 | orchestrator | Saturday 17 January 2026 00:51:53 +0000 (0:00:13.303) 0:04:26.325 ****** 2026-01-17 00:51:57.653929 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:51:57.653938 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:51:57.653946 | 
orchestrator | skipping: [testbed-node-5] 2026-01-17 00:51:57.653954 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:51:57.653963 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:51:57.653970 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:51:57.653979 | orchestrator | 2026-01-17 00:51:57.653987 | orchestrator | TASK [Manage taints] *********************************************************** 2026-01-17 00:51:57.653995 | orchestrator | Saturday 17 January 2026 00:51:54 +0000 (0:00:00.538) 0:04:26.864 ****** 2026-01-17 00:51:57.654004 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:51:57.654034 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:51:57.654045 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:51:57.654054 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:51:57.654062 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:51:57.654071 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:51:57.654079 | orchestrator | 2026-01-17 00:51:57.654087 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:51:57.654095 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:51:57.654104 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-17 00:51:57.654111 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-17 00:51:57.654118 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-17 00:51:57.654126 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-17 00:51:57.654133 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-17 00:51:57.654140 | orchestrator | 
testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-17 00:51:57.654147 | orchestrator | 2026-01-17 00:51:57.654154 | orchestrator | 2026-01-17 00:51:57.654161 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 00:51:57.654168 | orchestrator | Saturday 17 January 2026 00:51:54 +0000 (0:00:00.589) 0:04:27.454 ****** 2026-01-17 00:51:57.654176 | orchestrator | =============================================================================== 2026-01-17 00:51:57.654183 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.31s 2026-01-17 00:51:57.654190 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.44s 2026-01-17 00:51:57.654197 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.00s 2026-01-17 00:51:57.654209 | orchestrator | kubectl : Install required packages ------------------------------------ 13.61s 2026-01-17 00:51:57.654217 | orchestrator | Manage labels ---------------------------------------------------------- 13.30s 2026-01-17 00:51:57.654224 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.85s 2026-01-17 00:51:57.654231 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 9.44s 2026-01-17 00:51:57.654244 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.20s 2026-01-17 00:51:57.654251 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.12s 2026-01-17 00:51:57.654258 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.75s 2026-01-17 00:51:57.654266 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.38s 2026-01-17 
00:51:57.654273 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.11s 2026-01-17 00:51:57.654283 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.78s 2026-01-17 00:51:57.654291 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.50s 2026-01-17 00:51:57.654299 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 2.31s 2026-01-17 00:51:57.654306 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 2.18s 2026-01-17 00:51:57.654313 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 2.16s 2026-01-17 00:51:57.654320 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.95s 2026-01-17 00:51:57.654327 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.93s 2026-01-17 00:51:57.654334 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.92s 2026-01-17 00:51:57.654341 | orchestrator | 2026-01-17 00:51:57 | INFO  | Task d48b0165-8a02-469c-a595-d542ee56f3fd is in state STARTED 2026-01-17 00:51:57.654349 | orchestrator | 2026-01-17 00:51:57 | INFO  | Task d304a805-bccd-4134-89bf-7ecfbc27087a is in state STARTED 2026-01-17 00:51:57.654356 | orchestrator | 2026-01-17 00:51:57 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:51:57.654363 | orchestrator | 2026-01-17 00:51:57 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:51:57.654371 | orchestrator | 2026-01-17 00:51:57 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:51:57.654997 | orchestrator | 2026-01-17 00:51:57 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:51:57.655025 | orchestrator | 2026-01-17 00:51:57 | INFO  | Wait 
1 second(s) until the next check 2026-01-17 00:52:00.712609 | orchestrator | 2026-01-17 00:52:00 | INFO  | Task d48b0165-8a02-469c-a595-d542ee56f3fd is in state STARTED 2026-01-17 00:52:00.712708 | orchestrator | 2026-01-17 00:52:00 | INFO  | Task d304a805-bccd-4134-89bf-7ecfbc27087a is in state STARTED 2026-01-17 00:52:00.714181 | orchestrator | 2026-01-17 00:52:00 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:52:00.715661 | orchestrator | 2026-01-17 00:52:00 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:52:00.717459 | orchestrator | 2026-01-17 00:52:00 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:52:00.718640 | orchestrator | 2026-01-17 00:52:00 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:52:00.718730 | orchestrator | 2026-01-17 00:52:00 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:52:03.751654 | orchestrator | 2026-01-17 00:52:03 | INFO  | Task d48b0165-8a02-469c-a595-d542ee56f3fd is in state STARTED 2026-01-17 00:52:03.752084 | orchestrator | 2026-01-17 00:52:03 | INFO  | Task d304a805-bccd-4134-89bf-7ecfbc27087a is in state SUCCESS 2026-01-17 00:52:03.752858 | orchestrator | 2026-01-17 00:52:03 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:52:03.753811 | orchestrator | 2026-01-17 00:52:03 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:52:03.754808 | orchestrator | 2026-01-17 00:52:03 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:52:03.755716 | orchestrator | 2026-01-17 00:52:03 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:52:03.755754 | orchestrator | 2026-01-17 00:52:03 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:52:06.791738 | orchestrator | 2026-01-17 00:52:06 | INFO  | Task 
d48b0165-8a02-469c-a595-d542ee56f3fd is in state STARTED 2026-01-17 00:52:06.791840 | orchestrator | 2026-01-17 00:52:06 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:52:06.795983 | orchestrator | 2026-01-17 00:52:06 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:52:06.796719 | orchestrator | 2026-01-17 00:52:06 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:52:06.797450 | orchestrator | 2026-01-17 00:52:06 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:52:06.797485 | orchestrator | 2026-01-17 00:52:06 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:52:09.839751 | orchestrator | 2026-01-17 00:52:09 | INFO  | Task d48b0165-8a02-469c-a595-d542ee56f3fd is in state SUCCESS 2026-01-17 00:52:09.841937 | orchestrator | 2026-01-17 00:52:09 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:52:09.844196 | orchestrator | 2026-01-17 00:52:09 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:52:09.846178 | orchestrator | 2026-01-17 00:52:09 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:52:09.847961 | orchestrator | 2026-01-17 00:52:09 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:52:09.848002 | orchestrator | 2026-01-17 00:52:09 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:52:12.895266 | orchestrator | 2026-01-17 00:52:12 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:52:12.900237 | orchestrator | 2026-01-17 00:52:12 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:52:12.900869 | orchestrator | 2026-01-17 00:52:12 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:52:12.901855 | orchestrator | 2026-01-17 00:52:12 | INFO  | Task 
154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:52:12.901889 | orchestrator | 2026-01-17 00:52:12 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:52:40.360637 | orchestrator | 2026-01-17 00:52:40 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:52:40.362415 | orchestrator | 2026-01-17 00:52:40 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:52:40.363818 | orchestrator | 2026-01-17 00:52:40 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:52:40.365532 | orchestrator | 2026-01-17 00:52:40 | INFO  | Task 
154fae25-cd35-4438-bf9a-016d609d7368 is in state STARTED 2026-01-17 00:52:40.365737 | orchestrator | 2026-01-17 00:52:40 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:52:43.400225 | orchestrator | 2026-01-17 00:52:43 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:52:43.401083 | orchestrator | 2026-01-17 00:52:43 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:52:43.402934 | orchestrator | 2026-01-17 00:52:43 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:52:43.404630 | orchestrator | 2026-01-17 00:52:43 | INFO  | Task 154fae25-cd35-4438-bf9a-016d609d7368 is in state SUCCESS 2026-01-17 00:52:43.404826 | orchestrator | 2026-01-17 00:52:43 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:52:43.405784 | orchestrator | 2026-01-17 00:52:43.405802 | orchestrator | 2026-01-17 00:52:43.405807 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-01-17 00:52:43.405811 | orchestrator | 2026-01-17 00:52:43.405816 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-17 00:52:43.405821 | orchestrator | Saturday 17 January 2026 00:52:00 +0000 (0:00:00.156) 0:00:00.156 ****** 2026-01-17 00:52:43.405826 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-17 00:52:43.405830 | orchestrator | 2026-01-17 00:52:43.405834 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-17 00:52:43.405839 | orchestrator | Saturday 17 January 2026 00:52:01 +0000 (0:00:00.845) 0:00:01.001 ****** 2026-01-17 00:52:43.405843 | orchestrator | changed: [testbed-manager] 2026-01-17 00:52:43.405847 | orchestrator | 2026-01-17 00:52:43.405851 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-01-17 00:52:43.405855 | orchestrator | 
Saturday 17 January 2026 00:52:02 +0000 (0:00:01.664) 0:00:02.665 ****** 2026-01-17 00:52:43.405859 | orchestrator | changed: [testbed-manager] 2026-01-17 00:52:43.405863 | orchestrator | 2026-01-17 00:52:43.405867 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:52:43.405871 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:52:43.405877 | orchestrator | 2026-01-17 00:52:43.405881 | orchestrator | 2026-01-17 00:52:43.405885 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 00:52:43.405889 | orchestrator | Saturday 17 January 2026 00:52:03 +0000 (0:00:00.523) 0:00:03.189 ****** 2026-01-17 00:52:43.405893 | orchestrator | =============================================================================== 2026-01-17 00:52:43.405896 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.66s 2026-01-17 00:52:43.405900 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.85s 2026-01-17 00:52:43.405904 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.52s 2026-01-17 00:52:43.405928 | orchestrator | 2026-01-17 00:52:43.405932 | orchestrator | 2026-01-17 00:52:43.405947 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-01-17 00:52:43.405951 | orchestrator | 2026-01-17 00:52:43.405955 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-01-17 00:52:43.405958 | orchestrator | Saturday 17 January 2026 00:51:59 +0000 (0:00:00.145) 0:00:00.145 ****** 2026-01-17 00:52:43.405962 | orchestrator | ok: [testbed-manager] 2026-01-17 00:52:43.405968 | orchestrator | 2026-01-17 00:52:43.405972 | orchestrator | TASK [Create .kube directory] 
************************************************** 2026-01-17 00:52:43.405976 | orchestrator | Saturday 17 January 2026 00:51:59 +0000 (0:00:00.491) 0:00:00.637 ****** 2026-01-17 00:52:43.405980 | orchestrator | ok: [testbed-manager] 2026-01-17 00:52:43.405984 | orchestrator | 2026-01-17 00:52:43.405988 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-17 00:52:43.405992 | orchestrator | Saturday 17 January 2026 00:52:00 +0000 (0:00:00.658) 0:00:01.295 ****** 2026-01-17 00:52:43.405996 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-17 00:52:43.406000 | orchestrator | 2026-01-17 00:52:43.406004 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-17 00:52:43.406008 | orchestrator | Saturday 17 January 2026 00:52:01 +0000 (0:00:00.805) 0:00:02.101 ****** 2026-01-17 00:52:43.406012 | orchestrator | changed: [testbed-manager] 2026-01-17 00:52:43.406053 | orchestrator | 2026-01-17 00:52:43.406057 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-01-17 00:52:43.406061 | orchestrator | Saturday 17 January 2026 00:52:03 +0000 (0:00:02.308) 0:00:04.410 ****** 2026-01-17 00:52:43.406065 | orchestrator | changed: [testbed-manager] 2026-01-17 00:52:43.406069 | orchestrator | 2026-01-17 00:52:43.406073 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-01-17 00:52:43.406077 | orchestrator | Saturday 17 January 2026 00:52:04 +0000 (0:00:00.560) 0:00:04.970 ****** 2026-01-17 00:52:43.406081 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-17 00:52:43.406086 | orchestrator | 2026-01-17 00:52:43.406090 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-01-17 00:52:43.406094 | orchestrator | Saturday 17 January 2026 00:52:05 +0000 (0:00:01.584) 0:00:06.555 ****** 2026-01-17 
00:52:43.406098 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-17 00:52:43.406102 | orchestrator | 2026-01-17 00:52:43.406106 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-01-17 00:52:43.406110 | orchestrator | Saturday 17 January 2026 00:52:06 +0000 (0:00:00.887) 0:00:07.442 ****** 2026-01-17 00:52:43.406114 | orchestrator | ok: [testbed-manager] 2026-01-17 00:52:43.406118 | orchestrator | 2026-01-17 00:52:43.406122 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-01-17 00:52:43.406126 | orchestrator | Saturday 17 January 2026 00:52:07 +0000 (0:00:00.426) 0:00:07.869 ****** 2026-01-17 00:52:43.406130 | orchestrator | ok: [testbed-manager] 2026-01-17 00:52:43.406134 | orchestrator | 2026-01-17 00:52:43.406138 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:52:43.406142 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:52:43.406146 | orchestrator | 2026-01-17 00:52:43.406150 | orchestrator | 2026-01-17 00:52:43.406154 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 00:52:43.406158 | orchestrator | Saturday 17 January 2026 00:52:07 +0000 (0:00:00.337) 0:00:08.207 ****** 2026-01-17 00:52:43.406162 | orchestrator | =============================================================================== 2026-01-17 00:52:43.406166 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.31s 2026-01-17 00:52:43.406170 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.58s 2026-01-17 00:52:43.406174 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.89s 2026-01-17 00:52:43.406191 | orchestrator | Get kubeconfig file 
----------------------------------------------------- 0.81s 2026-01-17 00:52:43.406195 | orchestrator | Create .kube directory -------------------------------------------------- 0.66s 2026-01-17 00:52:43.406199 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.56s 2026-01-17 00:52:43.406203 | orchestrator | Get home directory of operator user ------------------------------------- 0.49s 2026-01-17 00:52:43.406206 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.43s 2026-01-17 00:52:43.406210 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.34s 2026-01-17 00:52:43.406214 | orchestrator | 2026-01-17 00:52:43.406218 | orchestrator | 2026-01-17 00:52:43.406221 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-01-17 00:52:43.406225 | orchestrator | 2026-01-17 00:52:43.406229 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-17 00:52:43.406233 | orchestrator | Saturday 17 January 2026 00:50:20 +0000 (0:00:00.089) 0:00:00.089 ****** 2026-01-17 00:52:43.406237 | orchestrator | ok: [localhost] => { 2026-01-17 00:52:43.406241 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-01-17 00:52:43.406245 | orchestrator | } 2026-01-17 00:52:43.406250 | orchestrator | 2026-01-17 00:52:43.406254 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-01-17 00:52:43.406257 | orchestrator | Saturday 17 January 2026 00:50:20 +0000 (0:00:00.069) 0:00:00.159 ****** 2026-01-17 00:52:43.406262 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 1, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-01-17 00:52:43.406268 | orchestrator | ...ignoring
2026-01-17 00:52:43.406272 | orchestrator |
2026-01-17 00:52:43.406276 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-01-17 00:52:43.406279 | orchestrator | Saturday 17 January 2026 00:50:23 +0000 (0:00:02.628) 0:00:02.787 ******
2026-01-17 00:52:43.406283 | orchestrator | skipping: [localhost]
2026-01-17 00:52:43.406287 | orchestrator |
2026-01-17 00:52:43.406291 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-01-17 00:52:43.406295 | orchestrator | Saturday 17 January 2026 00:50:23 +0000 (0:00:00.170) 0:00:02.958 ******
2026-01-17 00:52:43.406299 | orchestrator | ok: [localhost]
2026-01-17 00:52:43.406303 | orchestrator |
2026-01-17 00:52:43.406307 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-17 00:52:43.406310 | orchestrator |
2026-01-17 00:52:43.406315 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-17 00:52:43.406318 | orchestrator | Saturday 17 January 2026 00:50:24 +0000 (0:00:00.568) 0:00:03.527 ******
2026-01-17 00:52:43.406322 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:52:43.406326 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:52:43.406350 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:52:43.406354 | orchestrator |
2026-01-17 00:52:43.406358 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-17 00:52:43.406362 | orchestrator | Saturday 17 January 2026 00:50:24 +0000 (0:00:00.740) 0:00:04.268 ******
2026-01-17 00:52:43.406366 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-01-17 00:52:43.406370 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-01-17 00:52:43.406374 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-01-17 00:52:43.406378 | orchestrator |
2026-01-17 00:52:43.406382 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-01-17 00:52:43.406385 | orchestrator |
2026-01-17 00:52:43.406389 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-01-17 00:52:43.406393 | orchestrator | Saturday 17 January 2026 00:50:25 +0000 (0:00:00.958) 0:00:05.226 ******
2026-01-17 00:52:43.406397 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:52:43.406405 | orchestrator |
2026-01-17 00:52:43.406409 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-01-17 00:52:43.406413 | orchestrator | Saturday 17 January 2026 00:50:26 +0000 (0:00:00.729) 0:00:05.955 ******
2026-01-17 00:52:43.406418 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:52:43.406422 | orchestrator |
2026-01-17 00:52:43.406426 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-01-17 00:52:43.406430 | orchestrator | Saturday 17 January 2026 00:50:27 +0000 (0:00:01.227) 0:00:07.183 ******
2026-01-17 00:52:43.406435 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:52:43.406439 | orchestrator |
2026-01-17 00:52:43.406443 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-01-17 00:52:43.406447 | orchestrator | Saturday 17 January 2026 00:50:28 +0000 (0:00:00.363) 0:00:07.547 ******
2026-01-17 00:52:43.406452 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:52:43.406456 | orchestrator |
2026-01-17 00:52:43.406460 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-01-17 00:52:43.406465 | orchestrator | Saturday 17 January 2026 00:50:28 +0000 (0:00:00.290) 0:00:07.838 ******
2026-01-17 00:52:43.406469 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:52:43.406473 | orchestrator |
2026-01-17 00:52:43.406477 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-01-17 00:52:43.406482 | orchestrator | Saturday 17 January 2026 00:50:28 +0000 (0:00:00.382) 0:00:08.221 ******
2026-01-17 00:52:43.406486 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:52:43.406490 | orchestrator |
2026-01-17 00:52:43.406494 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-01-17 00:52:43.406499 | orchestrator | Saturday 17 January 2026 00:50:29 +0000 (0:00:00.723) 0:00:08.945 ******
2026-01-17 00:52:43.406503 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:52:43.406507 | orchestrator |
2026-01-17 00:52:43.406511 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-01-17 00:52:43.406518 | orchestrator | Saturday 17 January 2026 00:50:30 +0000 (0:00:01.107) 0:00:10.052 ******
2026-01-17 00:52:43.406523 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:52:43.406527 | orchestrator |
2026-01-17 00:52:43.406531 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-01-17 00:52:43.406544 | orchestrator | Saturday 17 January 2026 00:50:31 +0000 (0:00:01.247) 0:00:11.299 ******
2026-01-17 00:52:43.406550 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:52:43.406557 | orchestrator |
2026-01-17 00:52:43.406563 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-01-17 00:52:43.406570 | orchestrator | Saturday 17 January 2026 00:50:32 +0000 (0:00:00.426) 0:00:11.726 ******
2026-01-17 00:52:43.406576 | orchestrator |
skipping: [testbed-node-0]
2026-01-17 00:52:43.406582 | orchestrator |
2026-01-17 00:52:43.406588 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-01-17 00:52:43.406595 | orchestrator | Saturday 17 January 2026 00:50:32 +0000 (0:00:00.355) 0:00:12.082 ******
2026-01-17 00:52:43.406671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-17 00:52:43.406696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-17 00:52:43.406704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-17 00:52:43.406710 | orchestrator |
2026-01-17 00:52:43.406715 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-01-17 00:52:43.406720 | orchestrator | Saturday 17 January 2026 00:50:33 +0000 (0:00:01.261) 0:00:13.344 ******
2026-01-17 00:52:43.406729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-17 00:52:43.406738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-17 00:52:43.406746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-17 00:52:43.406751 | orchestrator |
2026-01-17 00:52:43.406755 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-01-17 00:52:43.406760 | orchestrator | Saturday 17 January 2026 00:50:38 +0000 (0:00:04.576) 0:00:17.920 ******
2026-01-17 00:52:43.406764 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-01-17 00:52:43.406769 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-01-17 00:52:43.406773 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-01-17 00:52:43.406777 | orchestrator |
2026-01-17 00:52:43.406781 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-01-17 00:52:43.406784 | orchestrator | Saturday 17 January 2026 00:50:41 +0000 (0:00:02.708) 0:00:20.629 ******
2026-01-17 00:52:43.406788 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-01-17 00:52:43.406792 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-01-17 00:52:43.406796 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-01-17 00:52:43.406799 | orchestrator |
2026-01-17 00:52:43.406803 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-01-17 00:52:43.406809 | orchestrator | Saturday 17 January 2026 00:50:43 +0000 (0:00:02.254) 0:00:22.884 ******
2026-01-17 00:52:43.406813 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-01-17 00:52:43.406817 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-01-17 00:52:43.406821 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-01-17 00:52:43.406824 | orchestrator |
2026-01-17 00:52:43.406828 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-01-17 00:52:43.406832 | orchestrator | Saturday 17 January 2026 00:50:45 +0000 (0:00:02.259) 0:00:25.143 ******
2026-01-17 00:52:43.406836 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-01-17 00:52:43.406839 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-01-17 00:52:43.406846 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-01-17 00:52:43.406850 | orchestrator |
2026-01-17 00:52:43.406853 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-01-17 00:52:43.406857 | orchestrator | Saturday 17 January 2026 00:50:47 +0000 (0:00:02.165) 0:00:27.309 ******
2026-01-17 00:52:43.406861 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-17 00:52:43.406864 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-17 00:52:43.406868 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-17 00:52:43.406872 | orchestrator |
2026-01-17 00:52:43.406876 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-01-17 00:52:43.406879 | orchestrator | Saturday 17 January 2026 00:50:49 +0000 (0:00:01.598) 0:00:28.908 ******
2026-01-17 00:52:43.406883 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-17 00:52:43.406890 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-17 00:52:43.406893 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-17 00:52:43.406897 | orchestrator |
2026-01-17 00:52:43.406901 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-01-17 00:52:43.406905 | orchestrator | Saturday 17 January 2026 00:50:50 +0000 (0:00:01.472) 0:00:30.381 ******
2026-01-17 00:52:43.406909 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:52:43.406915 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:52:43.406921 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:52:43.406927 | orchestrator |
2026-01-17 00:52:43.406933 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-01-17 00:52:43.406939 | orchestrator | Saturday 17 January 2026
00:50:51 +0000 (0:00:00.462) 0:00:30.844 ******
2026-01-17 00:52:43.406946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-17 00:52:43.406957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-17 00:52:43.406968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-17 00:52:43.406975 | orchestrator |
2026-01-17 00:52:43.406981 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-01-17 00:52:43.406987 | orchestrator | Saturday 17 January 2026 00:50:53 +0000 (0:00:01.838) 0:00:32.682 ******
2026-01-17 00:52:43.406992 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:52:43.406999 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:52:43.407005 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:52:43.407010 | orchestrator |
2026-01-17 00:52:43.407019 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-01-17
00:52:43.407026 | orchestrator | Saturday 17 January 2026 00:50:54 +0000 (0:00:01.278) 0:00:33.960 ******
2026-01-17 00:52:43.407032 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:52:43.407037 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:52:43.407043 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:52:43.407049 | orchestrator |
2026-01-17 00:52:43.407055 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-01-17 00:52:43.407061 | orchestrator | Saturday 17 January 2026 00:51:01 +0000 (0:00:06.650) 0:00:40.611 ******
2026-01-17 00:52:43.407068 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:52:43.407074 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:52:43.407087 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:52:43.407093 | orchestrator |
2026-01-17 00:52:43.407097 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-01-17 00:52:43.407101 | orchestrator |
2026-01-17 00:52:43.407105 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-01-17 00:52:43.407108 | orchestrator | Saturday 17 January 2026 00:51:01 +0000 (0:00:00.322) 0:00:40.934 ******
2026-01-17 00:52:43.407112 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:52:43.407117 | orchestrator |
2026-01-17 00:52:43.407121 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-01-17 00:52:43.407124 | orchestrator | Saturday 17 January 2026 00:51:02 +0000 (0:00:00.619) 0:00:41.554 ******
2026-01-17 00:52:43.407128 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:52:43.407132 | orchestrator |
2026-01-17 00:52:43.407136 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-01-17 00:52:43.407139 | orchestrator | Saturday 17 January 2026 00:51:02 +0000 (0:00:00.268) 0:00:41.822 ******
2026-01-17 00:52:43.407143 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:52:43.407147 | orchestrator |
2026-01-17 00:52:43.407151 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-01-17 00:52:43.407154 | orchestrator | Saturday 17 January 2026 00:51:04 +0000 (0:00:01.642) 0:00:43.465 ******
2026-01-17 00:52:43.407158 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:52:43.407162 | orchestrator |
2026-01-17 00:52:43.407166 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-01-17 00:52:43.407173 | orchestrator |
2026-01-17 00:52:43.407177 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-01-17 00:52:43.407181 | orchestrator | Saturday 17 January 2026 00:51:59 +0000 (0:00:55.461) 0:01:38.926 ******
2026-01-17 00:52:43.407185 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:52:43.407189 | orchestrator |
2026-01-17 00:52:43.407192 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-01-17 00:52:43.407196 | orchestrator | Saturday 17 January 2026 00:52:00 +0000 (0:00:00.530) 0:01:39.457 ******
2026-01-17 00:52:43.407200 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:52:43.407204 | orchestrator |
2026-01-17 00:52:43.407207 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-01-17 00:52:43.407211 | orchestrator | Saturday 17 January 2026 00:52:00 +0000 (0:00:00.212) 0:01:39.669 ******
2026-01-17 00:52:43.407215 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:52:43.407219 | orchestrator |
2026-01-17 00:52:43.407222 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-01-17 00:52:43.407226 | orchestrator | Saturday 17 January 2026 00:52:02 +0000 (0:00:01.822) 0:01:41.492 ******
2026-01-17 00:52:43.407230 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:52:43.407234 | orchestrator |
2026-01-17 00:52:43.407237 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-01-17 00:52:43.407241 | orchestrator |
2026-01-17 00:52:43.407245 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-01-17 00:52:43.407248 | orchestrator | Saturday 17 January 2026 00:52:17 +0000 (0:00:15.957) 0:01:57.450 ******
2026-01-17 00:52:43.407252 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:52:43.407256 | orchestrator |
2026-01-17 00:52:43.407263 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-01-17 00:52:43.407267 | orchestrator | Saturday 17 January 2026 00:52:18 +0000 (0:00:00.745) 0:01:58.196 ******
2026-01-17 00:52:43.407273 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:52:43.407279 | orchestrator |
2026-01-17 00:52:43.407284 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-01-17 00:52:43.407289 | orchestrator | Saturday 17 January 2026 00:52:19 +0000 (0:00:00.401) 0:01:58.597 ******
2026-01-17 00:52:43.407295 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:52:43.407300 | orchestrator |
2026-01-17 00:52:43.407306 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-01-17 00:52:43.407311 | orchestrator | Saturday 17 January 2026 00:52:26 +0000 (0:00:07.082) 0:02:05.680 ******
2026-01-17 00:52:43.407317 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:52:43.407323 | orchestrator |
2026-01-17 00:52:43.407389 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-01-17 00:52:43.407399 | orchestrator |
2026-01-17 00:52:43.407405 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-01-17 00:52:43.407411 | orchestrator | Saturday 17
January 2026 00:52:38 +0000 (0:00:11.998) 0:02:17.678 ******
2026-01-17 00:52:43.407417 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:52:43.407422 | orchestrator |
2026-01-17 00:52:43.407429 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-01-17 00:52:43.407435 | orchestrator | Saturday 17 January 2026 00:52:38 +0000 (0:00:00.497) 0:02:18.175 ******
2026-01-17 00:52:43.407441 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-01-17 00:52:43.407447 | orchestrator | enable_outward_rabbitmq_True
2026-01-17 00:52:43.407455 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-01-17 00:52:43.407459 | orchestrator | outward_rabbitmq_restart
2026-01-17 00:52:43.407463 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:52:43.407466 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:52:43.407470 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:52:43.407474 | orchestrator |
2026-01-17 00:52:43.407478 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-01-17 00:52:43.407488 | orchestrator | skipping: no hosts matched
2026-01-17 00:52:43.407492 | orchestrator |
2026-01-17 00:52:43.407499 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-01-17 00:52:43.407503 | orchestrator | skipping: no hosts matched
2026-01-17 00:52:43.407507 | orchestrator |
2026-01-17 00:52:43.407511 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-01-17 00:52:43.407514 | orchestrator | skipping: no hosts matched
2026-01-17 00:52:43.407518 | orchestrator |
2026-01-17 00:52:43.407522 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 00:52:43.407527 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-01-17 00:52:43.407532 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-01-17 00:52:43.407538 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-17 00:52:43.407544 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-17 00:52:43.407550 | orchestrator |
2026-01-17 00:52:43.407556 | orchestrator |
2026-01-17 00:52:43.407561 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 00:52:43.407567 | orchestrator | Saturday 17 January 2026 00:52:41 +0000 (0:00:02.675) 0:02:20.850 ******
2026-01-17 00:52:43.407573 | orchestrator | ===============================================================================
2026-01-17 00:52:43.407579 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 83.42s
2026-01-17 00:52:43.407586 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.55s
2026-01-17 00:52:43.407592 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.65s
2026-01-17 00:52:43.407598 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 4.58s
2026-01-17 00:52:43.407605 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.71s
2026-01-17 00:52:43.407612 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.68s
2026-01-17 00:52:43.407618 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.63s
2026-01-17 00:52:43.407624 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.26s
2026-01-17 00:52:43.407630 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.25s
2026-01-17 00:52:43.407637 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.17s
2026-01-17 00:52:43.407641 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.90s
2026-01-17 00:52:43.407645 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.84s
2026-01-17 00:52:43.407649 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.60s
2026-01-17 00:52:43.407653 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.47s
2026-01-17 00:52:43.407657 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.28s
2026-01-17 00:52:43.407660 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.26s
2026-01-17 00:52:43.407664 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.25s
2026-01-17 00:52:43.407673 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.23s
2026-01-17 00:52:43.407676 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.11s
2026-01-17 00:52:43.407680 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.96s
2026-01-17 00:52:46.440751 | orchestrator | 2026-01-17 00:52:46 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED
2026-01-17 00:52:46.442729 | orchestrator | 2026-01-17 00:52:46 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED
2026-01-17 00:52:46.444858 | orchestrator | 2026-01-17 00:52:46 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:52:46.444900 | orchestrator | 2026-01-17 00:52:46 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:52:49.484384 | orchestrator | 2026-01-17 00:52:49 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED
2026-01-17 00:52:49.485443 | orchestrator | 2026-01-17 00:52:49 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED
2026-01-17 00:52:49.488011 | orchestrator | 2026-01-17 00:52:49 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:52:49.488060 | orchestrator | 2026-01-17 00:52:49 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:52:52.571736 | orchestrator | 2026-01-17 00:52:52 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED
2026-01-17 00:52:52.571851 | orchestrator | 2026-01-17 00:52:52 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED
2026-01-17 00:52:52.571867 | orchestrator | 2026-01-17 00:52:52 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:52:52.571907 | orchestrator | 2026-01-17 00:52:52 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:52:55.568092 | orchestrator | 2026-01-17 00:52:55 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED
2026-01-17 00:52:55.568163 | orchestrator | 2026-01-17 00:52:55 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED
2026-01-17 00:52:55.568171 | orchestrator | 2026-01-17 00:52:55 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:52:55.568177 | orchestrator | 2026-01-17 00:52:55 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:52:58.605182 | orchestrator | 2026-01-17 00:52:58 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED
2026-01-17 00:52:58.608076 | orchestrator | 2026-01-17 00:52:58 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED
2026-01-17 00:52:58.610507 | orchestrator | 2026-01-17 00:52:58 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:52:58.610579 | orchestrator | 2026-01-17 00:52:58 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:53:01.660957 | orchestrator | 2026-01-17 00:53:01 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED
2026-01-17 00:53:01.664445 | orchestrator | 2026-01-17 00:53:01 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED
2026-01-17 00:53:01.666816 | orchestrator | 2026-01-17 00:53:01 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:53:01.667242 | orchestrator | 2026-01-17 00:53:01 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:53:04.729714 | orchestrator | 2026-01-17 00:53:04 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED
2026-01-17 00:53:04.730752 | orchestrator | 2026-01-17 00:53:04 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED
2026-01-17 00:53:04.733439 | orchestrator | 2026-01-17 00:53:04 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:53:04.733503 | orchestrator | 2026-01-17 00:53:04 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:53:07.767891 | orchestrator | 2026-01-17 00:53:07 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED
2026-01-17 00:53:07.768539 | orchestrator | 2026-01-17 00:53:07 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED
2026-01-17 00:53:07.769700 | orchestrator | 2026-01-17 00:53:07 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:53:07.769830 | orchestrator | 2026-01-17 00:53:07 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:53:10.807561 | orchestrator | 2026-01-17 00:53:10 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED
2026-01-17 00:53:10.809001 | orchestrator | 2026-01-17 00:53:10 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED
2026-01-17 00:53:10.810994 | orchestrator | 2026-01-17 00:53:10 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:53:10.811039 | orchestrator | 2026-01-17
00:53:10 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:53:13.845927 | orchestrator | 2026-01-17 00:53:13 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:53:13.846505 | orchestrator | 2026-01-17 00:53:13 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:53:13.848403 | orchestrator | 2026-01-17 00:53:13 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:53:13.848444 | orchestrator | 2026-01-17 00:53:13 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:53:16.897576 | orchestrator | 2026-01-17 00:53:16 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:53:16.899078 | orchestrator | 2026-01-17 00:53:16 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:53:16.899112 | orchestrator | 2026-01-17 00:53:16 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:53:16.899120 | orchestrator | 2026-01-17 00:53:16 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:53:19.933802 | orchestrator | 2026-01-17 00:53:19 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:53:19.934519 | orchestrator | 2026-01-17 00:53:19 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:53:19.935872 | orchestrator | 2026-01-17 00:53:19 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:53:19.936699 | orchestrator | 2026-01-17 00:53:19 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:53:22.979158 | orchestrator | 2026-01-17 00:53:22 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:53:22.980213 | orchestrator | 2026-01-17 00:53:22 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:53:22.982482 | orchestrator | 2026-01-17 00:53:22 | INFO  | Task 
1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:53:22.982544 | orchestrator | 2026-01-17 00:53:22 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:53:26.035254 | orchestrator | 2026-01-17 00:53:26 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:53:26.035380 | orchestrator | 2026-01-17 00:53:26 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:53:26.035390 | orchestrator | 2026-01-17 00:53:26 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:53:26.035402 | orchestrator | 2026-01-17 00:53:26 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:53:29.066778 | orchestrator | 2026-01-17 00:53:29 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:53:29.067101 | orchestrator | 2026-01-17 00:53:29 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:53:29.073010 | orchestrator | 2026-01-17 00:53:29 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:53:29.073069 | orchestrator | 2026-01-17 00:53:29 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:53:32.127321 | orchestrator | 2026-01-17 00:53:32 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:53:32.127479 | orchestrator | 2026-01-17 00:53:32 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:53:32.128152 | orchestrator | 2026-01-17 00:53:32 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:53:32.128742 | orchestrator | 2026-01-17 00:53:32 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:53:35.177189 | orchestrator | 2026-01-17 00:53:35 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:53:35.179050 | orchestrator | 2026-01-17 00:53:35 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state 
STARTED 2026-01-17 00:53:35.179800 | orchestrator | 2026-01-17 00:53:35 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:53:35.179831 | orchestrator | 2026-01-17 00:53:35 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:53:38.235561 | orchestrator | 2026-01-17 00:53:38 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:53:38.237539 | orchestrator | 2026-01-17 00:53:38 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:53:38.240466 | orchestrator | 2026-01-17 00:53:38 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:53:38.240854 | orchestrator | 2026-01-17 00:53:38 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:53:41.277038 | orchestrator | 2026-01-17 00:53:41 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:53:41.279190 | orchestrator | 2026-01-17 00:53:41 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state STARTED 2026-01-17 00:53:41.280815 | orchestrator | 2026-01-17 00:53:41 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:53:41.280872 | orchestrator | 2026-01-17 00:53:41 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:53:44.327196 | orchestrator | 2026-01-17 00:53:44 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:53:44.329441 | orchestrator | 2026-01-17 00:53:44 | INFO  | Task 59e97358-19ce-494e-9924-564a00a1f55f is in state SUCCESS 2026-01-17 00:53:44.329523 | orchestrator | 2026-01-17 00:53:44.331040 | orchestrator | 2026-01-17 00:53:44.331082 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-17 00:53:44.331089 | orchestrator | 2026-01-17 00:53:44.331096 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-17 00:53:44.331103 | orchestrator | 
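The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" lines above come from a client polling remote task state until every task finishes. A minimal sketch of that pattern, assuming a `get_state` callable that returns Celery-style state strings (the function and parameter names here are illustrative, not the actual osism implementation):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=3600.0):
    """Poll each task's state, sleeping between rounds, until none is STARTED."""
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        for task_id in sorted(pending):
            # Ask the backend for the current state of this task.
            states[task_id] = get_state(task_id)
        # Keep polling only the tasks that are still running.
        pending = {t for t, s in states.items() if s == "STARTED"}
        if pending:
            time.sleep(interval)
    return states
```

With a real backend, `get_state` would query the task queue; the loop returns the final state map once every task has left STARTED, matching the transition to SUCCESS seen in the log.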
Saturday 17 January 2026 00:51:09 +0000 (0:00:00.195) 0:00:00.195 ****** 2026-01-17 00:53:44.331139 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:53:44.331147 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:53:44.331153 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:53:44.331159 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:53:44.331165 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:53:44.331171 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:53:44.331225 | orchestrator | 2026-01-17 00:53:44.331233 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-17 00:53:44.331296 | orchestrator | Saturday 17 January 2026 00:51:10 +0000 (0:00:00.737) 0:00:00.933 ****** 2026-01-17 00:53:44.331482 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-01-17 00:53:44.331493 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-01-17 00:53:44.331517 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-01-17 00:53:44.331523 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-01-17 00:53:44.331529 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-01-17 00:53:44.331535 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-01-17 00:53:44.331540 | orchestrator | 2026-01-17 00:53:44.331546 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-01-17 00:53:44.331552 | orchestrator | 2026-01-17 00:53:44.331557 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-01-17 00:53:44.331563 | orchestrator | Saturday 17 January 2026 00:51:11 +0000 (0:00:01.075) 0:00:02.008 ****** 2026-01-17 00:53:44.331570 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:53:44.331577 | orchestrator | 
2026-01-17 00:53:44.331582 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-01-17 00:53:44.331588 | orchestrator | Saturday 17 January 2026 00:51:12 +0000 (0:00:01.143) 0:00:03.152 ****** 2026-01-17 00:53:44.331596 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331606 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331612 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331636 | orchestrator | 2026-01-17 00:53:44.331651 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-01-17 00:53:44.331657 | orchestrator | Saturday 17 January 2026 00:51:13 +0000 (0:00:01.672) 0:00:04.824 ****** 2026-01-17 00:53:44.331668 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331679 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331704 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331709 | orchestrator | 2026-01-17 00:53:44.331715 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-01-17 00:53:44.331721 | orchestrator | Saturday 17 January 2026 00:51:15 +0000 (0:00:01.993) 0:00:06.817 ****** 2026-01-17 00:53:44.331727 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331733 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331746 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331778 | orchestrator | 2026-01-17 00:53:44.331784 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-01-17 00:53:44.331789 | orchestrator | Saturday 17 January 2026 00:51:17 +0000 (0:00:01.254) 0:00:08.071 ****** 2026-01-17 00:53:44.331795 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331801 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331807 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331841 | orchestrator | 2026-01-17 00:53:44.331851 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-01-17 00:53:44.331857 | orchestrator | Saturday 17 January 2026 00:51:18 +0000 (0:00:01.784) 0:00:09.856 ****** 2026-01-17 00:53:44.331863 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331872 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331879 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331884 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.331902 | orchestrator | 2026-01-17 00:53:44.331908 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-01-17 00:53:44.331914 | orchestrator | Saturday 17 January 2026 00:51:20 +0000 (0:00:01.637) 0:00:11.494 ****** 2026-01-17 00:53:44.331920 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:53:44.331926 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:53:44.331931 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:53:44.331937 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:53:44.331947 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:53:44.331953 | orchestrator | changed: 
[testbed-node-2] 2026-01-17 00:53:44.331958 | orchestrator | 2026-01-17 00:53:44.331964 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-01-17 00:53:44.331970 | orchestrator | Saturday 17 January 2026 00:51:23 +0000 (0:00:02.995) 0:00:14.489 ****** 2026-01-17 00:53:44.331976 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-01-17 00:53:44.331982 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-01-17 00:53:44.331988 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-01-17 00:53:44.331993 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-01-17 00:53:44.331999 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-01-17 00:53:44.332005 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-17 00:53:44.332010 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-17 00:53:44.332016 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-01-17 00:53:44.332026 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-17 00:53:44.332032 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-17 00:53:44.332037 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-17 00:53:44.332043 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-17 00:53:44.332050 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-01-17 00:53:44.332059 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-17 00:53:44.332065 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-01-17 00:53:44.332071 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-01-17 00:53:44.332077 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-01-17 00:53:44.332083 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-17 00:53:44.332089 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-17 00:53:44.332095 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-01-17 00:53:44.332100 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-17 00:53:44.332106 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-17 00:53:44.332112 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-17 00:53:44.332117 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-17 00:53:44.332123 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-17 00:53:44.332129 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-17 00:53:44.332134 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-17 00:53:44.332144 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-17 00:53:44.332150 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-17 00:53:44.332156 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-17 00:53:44.332161 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-17 00:53:44.332167 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-17 00:53:44.332173 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-17 00:53:44.332179 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-17 00:53:44.332184 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-17 00:53:44.332190 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-01-17 00:53:44.332196 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-01-17 00:53:44.332202 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-17 00:53:44.332207 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-01-17 00:53:44.332213 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-01-17 00:53:44.332219 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-01-17 00:53:44.332224 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-01-17 00:53:44.332231 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-01-17 00:53:44.332236 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-01-17 00:53:44.332246 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-01-17 00:53:44.332252 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-01-17 00:53:44.332324 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-01-17 00:53:44.332334 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-01-17 00:53:44.332344 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-01-17 00:53:44.332358 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-01-17 00:53:44.332368 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-01-17 00:53:44.332378 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-01-17 00:53:44.332465 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-01-17 00:53:44.332478 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-01-17 00:53:44.332487 | orchestrator |
2026-01-17 00:53:44.332497 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-17 00:53:44.332513 | orchestrator | Saturday 17 January 2026 00:51:47 +0000 (0:00:23.599) 0:00:38.088 ******
2026-01-17 00:53:44.332523 | orchestrator |
2026-01-17 00:53:44.332532 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-17 00:53:44.332542 | orchestrator | Saturday 17 January 2026 00:51:47 +0000 (0:00:00.057) 0:00:38.146 ******
2026-01-17 00:53:44.332552 | orchestrator |
2026-01-17 00:53:44.332562 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-17 00:53:44.332571 | orchestrator | Saturday 17 January 2026 00:51:47 +0000 (0:00:00.061) 0:00:38.207 ******
2026-01-17 00:53:44.332581 | orchestrator |
2026-01-17 00:53:44.332591 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-17 00:53:44.332601 | orchestrator | Saturday 17 January 2026 00:51:47 +0000 (0:00:00.061) 0:00:38.269 ******
2026-01-17 00:53:44.332611 | orchestrator |
2026-01-17 00:53:44.332621 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-17 00:53:44.332630 | orchestrator | Saturday 17 January 2026 00:51:47 +0000 (0:00:00.060) 0:00:38.329 ******
2026-01-17 00:53:44.332640 | orchestrator |
2026-01-17 00:53:44.332649 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-17 00:53:44.332659 | orchestrator | Saturday 17 January 2026 00:51:47 +0000 (0:00:00.060) 0:00:38.389 ******
2026-01-17 00:53:44.332667 | orchestrator |
2026-01-17 00:53:44.332675 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-01-17 00:53:44.332683 | orchestrator | Saturday 17 January 2026 00:51:47 +0000 (0:00:00.060) 0:00:38.449 ******
2026-01-17 00:53:44.332691 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:53:44.332700 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:53:44.332709 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:53:44.332719 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:53:44.332729 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:53:44.332739 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:53:44.332747 | orchestrator |
2026-01-17 00:53:44.332757 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-01-17 00:53:44.332766 | orchestrator | Saturday 17 January 2026 00:51:50 +0000 (0:00:02.858) 0:00:41.308 ******
2026-01-17 00:53:44.332776 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:53:44.332785 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:53:44.332793 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:53:44.332802 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:53:44.332812 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:53:44.332822 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:53:44.332832 | orchestrator |
2026-01-17 00:53:44.332841 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-01-17 00:53:44.332849 | orchestrator |
2026-01-17 00:53:44.332858 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-01-17 00:53:44.332867 | orchestrator | Saturday 17 January 2026 00:52:19 +0000 (0:00:28.658) 0:01:09.967 ******
2026-01-17 00:53:44.332876 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:53:44.332887 | orchestrator |
2026-01-17 00:53:44.332898 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-01-17 00:53:44.332908 | orchestrator | Saturday 17 January 2026 00:52:20 +0000 (0:00:01.080) 0:01:11.048 ******
2026-01-17 00:53:44.332917 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:53:44.332926 | orchestrator |
2026-01-17 00:53:44.332935 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-01-17 00:53:44.332945 | orchestrator | Saturday 17 January 2026 00:52:20 +0000 (0:00:00.541) 0:01:11.589 ******
2026-01-17 00:53:44.332955 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:53:44.332965 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:53:44.332975 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:53:44.332984 | orchestrator |
2026-01-17 00:53:44.332993 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-01-17 00:53:44.333010 | orchestrator | Saturday 17 January 2026 00:52:21 +0000 (0:00:01.170) 0:01:12.759 ******
2026-01-17 00:53:44.333020 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:53:44.333029 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:53:44.333039 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:53:44.333049 | orchestrator |
2026-01-17 00:53:44.333070 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-01-17 00:53:44.333080 | orchestrator | Saturday 17 January 2026 00:52:22 +0000 (0:00:00.432) 0:01:13.192 ******
2026-01-17 00:53:44.333090 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:53:44.333100 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:53:44.333110 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:53:44.333120 | orchestrator |
2026-01-17 00:53:44.333130 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-01-17 00:53:44.333140 | orchestrator | Saturday 17 January 2026 00:52:22 +0000 (0:00:00.413) 0:01:13.605 ******
2026-01-17 00:53:44.333150 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:53:44.333160 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:53:44.333171 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:53:44.333182 | orchestrator |
2026-01-17 00:53:44.333199 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-01-17 00:53:44.333209 | orchestrator | Saturday 17 January 2026 00:52:23 +0000 (0:00:00.379) 0:01:13.985 ******
2026-01-17 00:53:44.333219 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:53:44.333229 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:53:44.333239 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:53:44.333249 | orchestrator |
2026-01-17 00:53:44.333280 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-01-17 00:53:44.333290 | orchestrator | Saturday 17 January 2026 00:52:23 +0000 (0:00:00.566) 0:01:14.551 ******
2026-01-17 00:53:44.333300 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:53:44.333310 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:53:44.333319 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:53:44.333329 | orchestrator |
2026-01-17 00:53:44.333338 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-01-17 00:53:44.333349 | orchestrator | Saturday 17 January 2026 00:52:24 +0000 (0:00:00.366) 0:01:14.918 ******
2026-01-17 00:53:44.333359 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:53:44.333369 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:53:44.333378 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:53:44.333387 | orchestrator |
2026-01-17 00:53:44.333396 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-01-17 00:53:44.333406 | orchestrator | Saturday 17 January 2026 00:52:24 +0000 (0:00:00.317) 0:01:15.236 ******
2026-01-17 00:53:44.333416 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:53:44.333425 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:53:44.333435 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:53:44.333445 | orchestrator |
2026-01-17 00:53:44.333455 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-01-17 00:53:44.333464 | orchestrator | Saturday 17 January 2026 00:52:24 +0000 (0:00:00.310) 0:01:15.546 ******
2026-01-17 00:53:44.333474 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:53:44.333483 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:53:44.333492 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:53:44.333502 | orchestrator |
2026-01-17 00:53:44.333512 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-01-17 00:53:44.333523 | orchestrator | Saturday 17 January 2026 00:52:25 +0000 (0:00:00.552) 0:01:16.099 ******
2026-01-17 00:53:44.333532 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:53:44.333541 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:53:44.333551 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:53:44.333561 | orchestrator |
2026-01-17 00:53:44.333572 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-01-17 00:53:44.333590 | orchestrator | Saturday 17 January 2026 00:52:25 +0000 (0:00:00.333) 0:01:16.432 ******
2026-01-17 00:53:44.333599 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:53:44.333609 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:53:44.333619 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:53:44.333627 | orchestrator |
2026-01-17 00:53:44.333637 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-01-17 00:53:44.333648 | orchestrator | Saturday 17 January 2026 00:52:25 +0000 (0:00:00.288) 0:01:16.721 ******
2026-01-17 00:53:44.333659 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:53:44.333669 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:53:44.333679 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:53:44.333708 | orchestrator |
2026-01-17 00:53:44.333718 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-01-17 00:53:44.333727 | orchestrator | Saturday 17 January 2026 00:52:26 +0000 (0:00:00.383) 0:01:17.105 ******
2026-01-17 00:53:44.333736 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:53:44.333745 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:53:44.333754 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:53:44.333764 | orchestrator |
2026-01-17 00:53:44.333774 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-01-17 00:53:44.333784 | orchestrator | Saturday 17 January 2026 00:52:26 +0000 (0:00:00.560) 0:01:17.665 ******
2026-01-17 00:53:44.333793 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:53:44.333802 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:53:44.333811 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:53:44.333820 | orchestrator |
2026-01-17 00:53:44.333829 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-01-17 00:53:44.333838 | orchestrator | Saturday 17 January 2026 00:52:27 +0000 (0:00:00.325) 0:01:17.991 ******
2026-01-17 00:53:44.333848 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:53:44.333858 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:53:44.333868 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:53:44.333877 | orchestrator |
2026-01-17 00:53:44.333886 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-01-17 00:53:44.333895 | orchestrator | Saturday 17 January 2026 00:52:27 +0000 (0:00:00.298) 0:01:18.289 ******
2026-01-17 00:53:44.333904 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:53:44.333913 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:53:44.333922 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:53:44.333931 | orchestrator |
2026-01-17 00:53:44.333940 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-01-17 00:53:44.333951 | orchestrator | Saturday 17 January 2026 00:52:27 +0000 (0:00:00.291) 0:01:18.580 ******
2026-01-17 00:53:44.333961 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:53:44.333970 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:53:44.333989 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:53:44.333999 | orchestrator |
2026-01-17 00:53:44.334008 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-01-17 00:53:44.334072 | orchestrator | Saturday 17 January 2026 00:52:28 +0000 (0:00:00.302) 0:01:18.883 ******
2026-01-17 00:53:44.334083 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:53:44.334092 | orchestrator |
2026-01-17 00:53:44.334102 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-01-17 00:53:44.334112 | orchestrator | Saturday 17 January 2026 00:52:28 +0000 (0:00:00.760) 0:01:19.644 ******
2026-01-17 00:53:44.334122 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:53:44.334133 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:53:44.334142 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:53:44.334152 | orchestrator |
2026-01-17 00:53:44.334162 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-01-17 00:53:44.334171 | orchestrator | Saturday 17 January 2026 00:52:29 +0000 (0:00:00.483) 0:01:20.127 ******
2026-01-17 00:53:44.334189 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:53:44.334200 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:53:44.334211 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:53:44.334220 | orchestrator |
2026-01-17 00:53:44.334230 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-01-17 00:53:44.334239 | orchestrator | Saturday 17 January 2026 00:52:29 +0000 (0:00:00.498) 0:01:20.625 ******
2026-01-17 00:53:44.334249 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:53:44.334311 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:53:44.334323 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:53:44.334333 | orchestrator |
2026-01-17 00:53:44.334342 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-01-17 00:53:44.334354 | orchestrator | Saturday 17 January 2026 00:52:30 +0000 (0:00:00.679) 0:01:21.305 ******
2026-01-17 00:53:44.334501 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:53:44.334514 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:53:44.334525 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:53:44.334534 | orchestrator |
2026-01-17 00:53:44.334544 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-01-17 00:53:44.334553 | orchestrator | Saturday 17 January 2026 00:52:30 +0000 (0:00:00.475) 0:01:21.781 ******
2026-01-17 00:53:44.334563 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:53:44.334572 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:53:44.334581 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:53:44.334590 | orchestrator |
2026-01-17 00:53:44.334600 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-01-17 00:53:44.334609 | orchestrator | Saturday 17 January 2026 00:52:31 +0000 (0:00:00.451) 0:01:22.232 ******
2026-01-17 00:53:44.334653 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:53:44.334663 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:53:44.334673 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:53:44.334682 | orchestrator |
2026-01-17 00:53:44.334692 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-01-17 00:53:44.334702 | orchestrator | Saturday 17 January 2026 00:52:31 +0000 (0:00:00.343) 0:01:22.576 ******
2026-01-17 00:53:44.334712 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:53:44.334722 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:53:44.334731 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:53:44.334741 | orchestrator |
2026-01-17 00:53:44.334749 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-01-17 00:53:44.334759 | orchestrator | Saturday 17 January 2026 00:52:32 +0000 (0:00:00.516) 0:01:23.093 ******
2026-01-17 00:53:44.334769 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:53:44.334779 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:53:44.334789 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:53:44.334798 | orchestrator |
2026-01-17 00:53:44.334807 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-01-17 00:53:44.334816 | orchestrator | Saturday 17 January 2026 00:52:32 +0000 (0:00:00.347) 0:01:23.440 ******
2026-01-17 00:53:44.334829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.334841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.334853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.334883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.334898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.334913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.334924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.334934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.334944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.334956 | orchestrator |
2026-01-17 00:53:44.334966 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-01-17 00:53:44.334976 | orchestrator | Saturday 17 January 2026 00:52:34 +0000 (0:00:01.541) 0:01:24.981 ******
2026-01-17 00:53:44.334987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.334994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.335000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.335012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.335023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.335030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.335044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.335050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.335056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.335062 | orchestrator |
2026-01-17 00:53:44.335068 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-01-17 00:53:44.335074 | orchestrator | Saturday 17 January 2026 00:52:38 +0000 (0:00:04.216) 0:01:29.198 ******
2026-01-17 00:53:44.335080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.335086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.335093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.335103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.335110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.335120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.335128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.335137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.335145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:53:44.335152 | orchestrator |
2026-01-17 00:53:44.335159 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-17 00:53:44.335165 | orchestrator | Saturday 17 January 2026 00:52:40 +0000 (0:00:02.633) 0:01:31.832 ******
2026-01-17 00:53:44.335172 | orchestrator |
2026-01-17 00:53:44.335179 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-17 00:53:44.335185 | orchestrator | Saturday 17 January 2026 00:52:41 +0000 (0:00:00.071) 0:01:31.903 ******
2026-01-17 00:53:44.335191 | orchestrator |
2026-01-17 00:53:44.335198 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-17 00:53:44.335204 | orchestrator | Saturday 17 January 2026 00:52:41 +0000 (0:00:00.124) 0:01:32.028 ******
2026-01-17 00:53:44.335210 | orchestrator |
2026-01-17 00:53:44.335217 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-01-17 00:53:44.335224 | orchestrator | Saturday 17 January 2026 00:52:41 +0000 (0:00:00.119) 0:01:32.147 ******
2026-01-17 00:53:44.335231 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:53:44.335237 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:53:44.335244 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:53:44.335251 | orchestrator |
2026-01-17 00:53:44.335298 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-01-17 00:53:44.335306 | orchestrator | Saturday 17 January 2026 00:52:49 +0000 (0:00:07.776) 0:01:39.924 ******
2026-01-17 00:53:44.335317 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:53:44.335324 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:53:44.335331 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:53:44.335337 | orchestrator |
2026-01-17 00:53:44.335344 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-01-17 00:53:44.335350 | orchestrator | Saturday 17 January 2026 00:52:56 +0000 (0:00:07.673) 0:01:47.597 ******
2026-01-17 00:53:44.335357 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:53:44.335363 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:53:44.335370 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:53:44.335376 | orchestrator |
2026-01-17 00:53:44.335382 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-01-17 00:53:44.335389 | orchestrator | Saturday 17 January 2026 00:53:04 +0000 (0:00:07.459) 0:01:55.057 ******
2026-01-17 00:53:44.335395 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:53:44.335401 | orchestrator |
2026-01-17 00:53:44.335408 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-01-17 00:53:44.335415 | orchestrator | Saturday 17 January 2026 00:53:04 +0000 (0:00:00.357) 0:01:55.414 ******
2026-01-17 00:53:44.335421 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:53:44.335428 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:53:44.335434 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:53:44.335441 | orchestrator |
2026-01-17 00:53:44.335447 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-01-17 00:53:44.335454 | orchestrator | Saturday 17 January 2026 00:53:05 +0000 (0:00:00.994) 0:01:56.409 ******
2026-01-17 00:53:44.335461 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:53:44.335467 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:53:44.335474 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:53:44.335481 | orchestrator |
2026-01-17 00:53:44.335488 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-01-17 00:53:44.335494 | orchestrator | Saturday 17 January 2026 00:53:06 +0000 (0:00:00.722) 0:01:57.131 ******
2026-01-17 00:53:44.335500 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:53:44.335506 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:53:44.335512 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:53:44.335518 | orchestrator |
2026-01-17 00:53:44.335523 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings]
*************************** 2026-01-17 00:53:44.335529 | orchestrator | Saturday 17 January 2026 00:53:07 +0000 (0:00:00.832) 0:01:57.963 ****** 2026-01-17 00:53:44.335535 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:53:44.335540 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:53:44.335546 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:53:44.335551 | orchestrator | 2026-01-17 00:53:44.335557 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-17 00:53:44.335562 | orchestrator | Saturday 17 January 2026 00:53:07 +0000 (0:00:00.653) 0:01:58.617 ****** 2026-01-17 00:53:44.335567 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:53:44.335573 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:53:44.335582 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:53:44.335588 | orchestrator | 2026-01-17 00:53:44.335593 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-17 00:53:44.335599 | orchestrator | Saturday 17 January 2026 00:53:08 +0000 (0:00:01.102) 0:01:59.719 ****** 2026-01-17 00:53:44.335605 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:53:44.335610 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:53:44.335616 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:53:44.335621 | orchestrator | 2026-01-17 00:53:44.335627 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-01-17 00:53:44.335632 | orchestrator | Saturday 17 January 2026 00:53:09 +0000 (0:00:00.916) 0:02:00.636 ****** 2026-01-17 00:53:44.335638 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:53:44.335644 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:53:44.335650 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:53:44.335659 | orchestrator | 2026-01-17 00:53:44.335668 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-17 00:53:44.335674 | 
orchestrator | Saturday 17 January 2026 00:53:10 +0000 (0:00:00.301) 0:02:00.937 ****** 2026-01-17 00:53:44.335680 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335686 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335692 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335697 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335705 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335711 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335716 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335722 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335733 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335738 | orchestrator | 2026-01-17 00:53:44.335752 | 
orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-17 00:53:44.335763 | orchestrator | Saturday 17 January 2026 00:53:11 +0000 (0:00:01.522) 0:02:02.460 ****** 2026-01-17 00:53:44.335772 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335778 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335784 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335795 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335807 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335824 | orchestrator | 2026-01-17 00:53:44.335829 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-01-17 00:53:44.335835 | orchestrator | Saturday 17 January 2026 00:53:16 +0000 (0:00:04.500) 0:02:06.960 ****** 2026-01-17 00:53:44.335849 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335855 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335864 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335876 | orchestrator | ok: 
[testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335893 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 00:53:44.335904 | orchestrator | 2026-01-17 00:53:44.335910 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-17 00:53:44.335915 | orchestrator | Saturday 17 January 2026 00:53:18 +0000 (0:00:02.888) 0:02:09.849 ****** 2026-01-17 00:53:44.335925 | orchestrator | 2026-01-17 00:53:44.335930 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-17 00:53:44.335936 | orchestrator | Saturday 17 January 2026 00:53:19 +0000 (0:00:00.069) 0:02:09.918 ****** 2026-01-17 00:53:44.335941 | orchestrator | 2026-01-17 00:53:44.335947 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-17 00:53:44.335952 | orchestrator | Saturday 17 January 2026 00:53:19 +0000 (0:00:00.069) 0:02:09.988 ****** 2026-01-17 00:53:44.335957 | orchestrator | 2026-01-17 00:53:44.335963 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-17 00:53:44.335968 | orchestrator | Saturday 17 January 2026 00:53:19 +0000 (0:00:00.066) 0:02:10.055 ****** 2026-01-17 00:53:44.335974 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:53:44.335979 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:53:44.335985 | orchestrator | 2026-01-17 00:53:44.335994 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-17 00:53:44.335999 | orchestrator | Saturday 17 January 2026 00:53:25 +0000 (0:00:06.641) 0:02:16.697 ****** 2026-01-17 00:53:44.336004 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:53:44.336010 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:53:44.336016 | orchestrator | 2026-01-17 00:53:44.336021 | orchestrator | RUNNING 
HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-17 00:53:44.336026 | orchestrator | Saturday 17 January 2026 00:53:32 +0000 (0:00:06.318) 0:02:23.015 ****** 2026-01-17 00:53:44.336032 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:53:44.336038 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:53:44.336043 | orchestrator | 2026-01-17 00:53:44.336048 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-17 00:53:44.336057 | orchestrator | Saturday 17 January 2026 00:53:38 +0000 (0:00:06.735) 0:02:29.751 ****** 2026-01-17 00:53:44.336063 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:53:44.336068 | orchestrator | 2026-01-17 00:53:44.336074 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-17 00:53:44.336079 | orchestrator | Saturday 17 January 2026 00:53:39 +0000 (0:00:00.147) 0:02:29.899 ****** 2026-01-17 00:53:44.336084 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:53:44.336090 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:53:44.336095 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:53:44.336101 | orchestrator | 2026-01-17 00:53:44.336110 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-17 00:53:44.336119 | orchestrator | Saturday 17 January 2026 00:53:39 +0000 (0:00:00.836) 0:02:30.735 ****** 2026-01-17 00:53:44.336129 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:53:44.336144 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:53:44.336153 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:53:44.336161 | orchestrator | 2026-01-17 00:53:44.336170 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-17 00:53:44.336180 | orchestrator | Saturday 17 January 2026 00:53:40 +0000 (0:00:00.750) 0:02:31.486 ****** 2026-01-17 00:53:44.336187 | orchestrator | ok: 
[testbed-node-1] 2026-01-17 00:53:44.336192 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:53:44.336198 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:53:44.336203 | orchestrator | 2026-01-17 00:53:44.336209 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-17 00:53:44.336214 | orchestrator | Saturday 17 January 2026 00:53:41 +0000 (0:00:00.844) 0:02:32.331 ****** 2026-01-17 00:53:44.336223 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:53:44.336232 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:53:44.336244 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:53:44.336281 | orchestrator | 2026-01-17 00:53:44.336291 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-17 00:53:44.336301 | orchestrator | Saturday 17 January 2026 00:53:42 +0000 (0:00:00.649) 0:02:32.980 ****** 2026-01-17 00:53:44.336310 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:53:44.336319 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:53:44.336335 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:53:44.336343 | orchestrator | 2026-01-17 00:53:44.336352 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-17 00:53:44.336361 | orchestrator | Saturday 17 January 2026 00:53:42 +0000 (0:00:00.749) 0:02:33.729 ****** 2026-01-17 00:53:44.336370 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:53:44.336377 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:53:44.336385 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:53:44.336394 | orchestrator | 2026-01-17 00:53:44.336403 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:53:44.336414 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-17 00:53:44.336424 | orchestrator | testbed-node-1 : ok=43  changed=19  
unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-01-17 00:53:44.336434 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-01-17 00:53:44.336443 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:53:44.336452 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:53:44.336463 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 00:53:44.336469 | orchestrator | 2026-01-17 00:53:44.336474 | orchestrator | 2026-01-17 00:53:44.336480 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 00:53:44.336485 | orchestrator | Saturday 17 January 2026 00:53:43 +0000 (0:00:00.952) 0:02:34.682 ****** 2026-01-17 00:53:44.336491 | orchestrator | =============================================================================== 2026-01-17 00:53:44.336496 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 28.66s 2026-01-17 00:53:44.336502 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 23.60s 2026-01-17 00:53:44.336508 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.42s 2026-01-17 00:53:44.336513 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.20s 2026-01-17 00:53:44.336519 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.99s 2026-01-17 00:53:44.336524 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.50s 2026-01-17 00:53:44.336530 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.22s 2026-01-17 00:53:44.336541 | orchestrator | ovn-controller : Create br-int bridge on 
OpenvSwitch -------------------- 3.00s 2026-01-17 00:53:44.336547 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.89s 2026-01-17 00:53:44.336552 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.86s 2026-01-17 00:53:44.336558 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.63s 2026-01-17 00:53:44.336563 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.99s 2026-01-17 00:53:44.336569 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.78s 2026-01-17 00:53:44.336574 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.67s 2026-01-17 00:53:44.336585 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.64s 2026-01-17 00:53:44.336591 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.54s 2026-01-17 00:53:44.336596 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.52s 2026-01-17 00:53:44.336602 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.25s 2026-01-17 00:53:44.336613 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 1.17s 2026-01-17 00:53:44.336618 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.14s 2026-01-17 00:53:44.336623 | orchestrator | 2026-01-17 00:53:44 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:53:44.336629 | orchestrator | 2026-01-17 00:53:44 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:53:47.383504 | orchestrator | 2026-01-17 00:53:47 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:53:47.383836 | orchestrator | 2026-01-17 00:53:47 | INFO  | Task 
1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:53:47.383964 | orchestrator | 2026-01-17 00:53:47 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:55:06.664094 | orchestrator | 2026-01-17 00:55:06 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:55:06.666647 | orchestrator | 2026-01-17 00:55:06 | INFO  
| Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:55:06.666770 | orchestrator | 2026-01-17 00:55:06 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:55:09.712278 | orchestrator | 2026-01-17 00:55:09 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:55:09.713498 | orchestrator | 2026-01-17 00:55:09 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:55:09.713546 | orchestrator | 2026-01-17 00:55:09 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:55:12.757840 | orchestrator | 2026-01-17 00:55:12 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:55:12.758596 | orchestrator | 2026-01-17 00:55:12 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:55:12.758646 | orchestrator | 2026-01-17 00:55:12 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:55:15.802101 | orchestrator | 2026-01-17 00:55:15 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:55:15.804061 | orchestrator | 2026-01-17 00:55:15 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:55:15.804102 | orchestrator | 2026-01-17 00:55:15 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:55:18.852658 | orchestrator | 2026-01-17 00:55:18 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:55:18.854790 | orchestrator | 2026-01-17 00:55:18 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:55:18.854840 | orchestrator | 2026-01-17 00:55:18 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:55:21.909972 | orchestrator | 2026-01-17 00:55:21 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:55:21.910569 | orchestrator | 2026-01-17 00:55:21 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 
00:55:21.910616 | orchestrator | 2026-01-17 00:55:21 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:55:24.956594 | orchestrator | 2026-01-17 00:55:24 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:55:24.958844 | orchestrator | 2026-01-17 00:55:24 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:55:24.958911 | orchestrator | 2026-01-17 00:55:24 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:55:28.000673 | orchestrator | 2026-01-17 00:55:28 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:55:28.000744 | orchestrator | 2026-01-17 00:55:28 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:55:28.000752 | orchestrator | 2026-01-17 00:55:28 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:55:31.044389 | orchestrator | 2026-01-17 00:55:31 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:55:31.048464 | orchestrator | 2026-01-17 00:55:31 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:55:31.048533 | orchestrator | 2026-01-17 00:55:31 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:55:34.099997 | orchestrator | 2026-01-17 00:55:34 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:55:34.102707 | orchestrator | 2026-01-17 00:55:34 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:55:34.102770 | orchestrator | 2026-01-17 00:55:34 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:55:37.147520 | orchestrator | 2026-01-17 00:55:37 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:55:37.150216 | orchestrator | 2026-01-17 00:55:37 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:55:37.150279 | orchestrator | 2026-01-17 00:55:37 | INFO  | Wait 1 second(s) 
until the next check 2026-01-17 00:55:40.193570 | orchestrator | 2026-01-17 00:55:40 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:55:40.196309 | orchestrator | 2026-01-17 00:55:40 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:55:40.196435 | orchestrator | 2026-01-17 00:55:40 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:55:43.240541 | orchestrator | 2026-01-17 00:55:43 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:55:43.242485 | orchestrator | 2026-01-17 00:55:43 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:55:43.242547 | orchestrator | 2026-01-17 00:55:43 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:55:46.281668 | orchestrator | 2026-01-17 00:55:46 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:55:46.283174 | orchestrator | 2026-01-17 00:55:46 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:55:46.283225 | orchestrator | 2026-01-17 00:55:46 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:55:49.324358 | orchestrator | 2026-01-17 00:55:49 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:55:49.328566 | orchestrator | 2026-01-17 00:55:49 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:55:49.328636 | orchestrator | 2026-01-17 00:55:49 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:55:52.376599 | orchestrator | 2026-01-17 00:55:52 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:55:52.377342 | orchestrator | 2026-01-17 00:55:52 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:55:52.377416 | orchestrator | 2026-01-17 00:55:52 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:55:55.417874 | orchestrator | 2026-01-17 
00:55:55 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:55:55.422175 | orchestrator | 2026-01-17 00:55:55 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:55:55.422256 | orchestrator | 2026-01-17 00:55:55 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:55:58.455583 | orchestrator | 2026-01-17 00:55:58 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:55:58.456723 | orchestrator | 2026-01-17 00:55:58 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:55:58.456778 | orchestrator | 2026-01-17 00:55:58 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:56:01.501143 | orchestrator | 2026-01-17 00:56:01 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:56:01.501846 | orchestrator | 2026-01-17 00:56:01 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:56:01.501877 | orchestrator | 2026-01-17 00:56:01 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:56:04.550004 | orchestrator | 2026-01-17 00:56:04 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:56:04.552694 | orchestrator | 2026-01-17 00:56:04 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:56:04.552779 | orchestrator | 2026-01-17 00:56:04 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:56:07.599577 | orchestrator | 2026-01-17 00:56:07 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:56:07.601822 | orchestrator | 2026-01-17 00:56:07 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:56:07.601871 | orchestrator | 2026-01-17 00:56:07 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:56:10.643288 | orchestrator | 2026-01-17 00:56:10 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state 
STARTED 2026-01-17 00:56:10.643920 | orchestrator | 2026-01-17 00:56:10 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:56:10.643935 | orchestrator | 2026-01-17 00:56:10 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:56:13.674310 | orchestrator | 2026-01-17 00:56:13 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:56:13.674436 | orchestrator | 2026-01-17 00:56:13 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:56:13.676956 | orchestrator | 2026-01-17 00:56:13 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:56:16.729333 | orchestrator | 2026-01-17 00:56:16 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:56:16.730453 | orchestrator | 2026-01-17 00:56:16 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:56:16.730874 | orchestrator | 2026-01-17 00:56:16 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:56:19.787248 | orchestrator | 2026-01-17 00:56:19 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:56:19.789754 | orchestrator | 2026-01-17 00:56:19 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:56:19.789887 | orchestrator | 2026-01-17 00:56:19 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:56:22.833295 | orchestrator | 2026-01-17 00:56:22 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:56:22.834269 | orchestrator | 2026-01-17 00:56:22 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:56:22.834308 | orchestrator | 2026-01-17 00:56:22 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:56:25.885022 | orchestrator | 2026-01-17 00:56:25 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:56:25.888978 | orchestrator | 2026-01-17 00:56:25 | INFO  
| Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:56:25.889052 | orchestrator | 2026-01-17 00:56:25 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:56:28.919454 | orchestrator | 2026-01-17 00:56:28 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:56:28.920913 | orchestrator | 2026-01-17 00:56:28 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:56:28.921789 | orchestrator | 2026-01-17 00:56:28 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:56:31.976133 | orchestrator | 2026-01-17 00:56:31 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:56:31.976865 | orchestrator | 2026-01-17 00:56:31 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:56:31.976893 | orchestrator | 2026-01-17 00:56:31 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:56:35.020177 | orchestrator | 2026-01-17 00:56:35 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:56:35.021634 | orchestrator | 2026-01-17 00:56:35 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:56:35.021680 | orchestrator | 2026-01-17 00:56:35 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:56:38.081275 | orchestrator | 2026-01-17 00:56:38 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state STARTED 2026-01-17 00:56:38.082912 | orchestrator | 2026-01-17 00:56:38 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:56:38.082979 | orchestrator | 2026-01-17 00:56:38 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:56:41.127141 | orchestrator | 2026-01-17 00:56:41 | INFO  | Task a2bd0e71-76fc-40aa-b003-09f7646ecc3c is in state STARTED 2026-01-17 00:56:41.131465 | orchestrator | 2026-01-17 00:56:41 | INFO  | Task 7002a706-1e96-40e0-a78c-3e70f3715b86 is in state SUCCESS 2026-01-17 
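The wait loop above (query each task's state, sleep, repeat until a terminal state is reached) can be sketched as follows. This is a minimal illustration of the pattern, not the osism client's actual code: `wait_for_tasks`, `get_state`, and the injected `sleep` callable are hypothetical names.

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=600.0, sleep=time.sleep):
    """Poll task states until every task reaches a terminal state.

    get_state: callable mapping a task id to its current state string
    (hypothetical; stands in for whatever API the real client queries).
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    results = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                results[task_id] = state
        pending -= results.keys()
        if pending:
            # Matches the "Wait N second(s) until the next check" lines above.
            print(f"Wait {interval:g} second(s) until the next check")
            sleep(interval)
    return results
```

Injecting `sleep` keeps the loop testable without real delays; in production the default `time.sleep` applies.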
2026-01-17 00:56:41.133429 | orchestrator |
2026-01-17 00:56:41.134861 | orchestrator |
2026-01-17 00:56:41.134905 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-17 00:56:41.134915 | orchestrator |
2026-01-17 00:56:41.134923 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-17 00:56:41.134931 | orchestrator | Saturday 17 January 2026 00:49:57 +0000 (0:00:00.474) 0:00:00.474 ******
2026-01-17 00:56:41.134939 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:56:41.134948 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:56:41.134955 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:56:41.134963 | orchestrator |
2026-01-17 00:56:41.134970 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-17 00:56:41.134992 | orchestrator | Saturday 17 January 2026 00:49:57 +0000 (0:00:00.555) 0:00:01.030 ******
2026-01-17 00:56:41.135001 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-01-17 00:56:41.135008 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-01-17 00:56:41.135015 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-01-17 00:56:41.135023 | orchestrator |
2026-01-17 00:56:41.135030 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-01-17 00:56:41.135037 | orchestrator |
2026-01-17 00:56:41.135045 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-01-17 00:56:41.135052 | orchestrator | Saturday 17 January 2026 00:49:58 +0000 (0:00:00.861) 0:00:01.891 ******
2026-01-17 00:56:41.135112 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:56:41.135120 | orchestrator |
2026-01-17 00:56:41.135132 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-01-17 00:56:41.135144 | orchestrator | Saturday 17 January 2026 00:49:59 +0000 (0:00:01.279) 0:00:03.170 ******
2026-01-17 00:56:41.135163 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:56:41.135177 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:56:41.135188 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:56:41.135199 | orchestrator |
2026-01-17 00:56:41.135211 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-01-17 00:56:41.135222 | orchestrator | Saturday 17 January 2026 00:50:00 +0000 (0:00:01.134) 0:00:04.304 ******
2026-01-17 00:56:41.135234 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:56:41.135245 | orchestrator |
2026-01-17 00:56:41.135257 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-01-17 00:56:41.135268 | orchestrator | Saturday 17 January 2026 00:50:02 +0000 (0:00:01.171) 0:00:05.476 ******
2026-01-17 00:56:41.135279 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:56:41.135431 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:56:41.135441 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:56:41.135460 | orchestrator |
2026-01-17 00:56:41.135467 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-01-17 00:56:41.135475 | orchestrator | Saturday 17 January 2026 00:50:03 +0000 (0:00:01.002) 0:00:06.478 ******
2026-01-17 00:56:41.135482 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-17 00:56:41.135490 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-17 00:56:41.135498 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-17 00:56:41.135505 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-17 00:56:41.135512 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-17 00:56:41.135543 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-17 00:56:41.135550 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-17 00:56:41.135557 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-17 00:56:41.135564 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-17 00:56:41.135572 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-17 00:56:41.135579 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-17 00:56:41.135586 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-17 00:56:41.135593 | orchestrator |
2026-01-17 00:56:41.135600 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-17 00:56:41.135607 | orchestrator | Saturday 17 January 2026 00:50:07 +0000 (0:00:04.223) 0:00:10.702 ******
2026-01-17 00:56:41.135614 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-17 00:56:41.135622 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-17 00:56:41.135629 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-17 00:56:41.135636 | orchestrator |
2026-01-17 00:56:41.135643 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-17 00:56:41.135650 | orchestrator | Saturday 17 January 2026 00:50:08 +0000 (0:00:01.157) 0:00:11.860 ******
2026-01-17 00:56:41.135657 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-17 00:56:41.135664 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-17 00:56:41.135671 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-17 00:56:41.135678 | orchestrator |
2026-01-17 00:56:41.135685 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-17 00:56:41.135692 | orchestrator | Saturday 17 January 2026 00:50:10 +0000 (0:00:02.397) 0:00:14.257 ******
2026-01-17 00:56:41.135699 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-01-17 00:56:41.135707 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.135726 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-01-17 00:56:41.135734 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.135741 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-01-17 00:56:41.135748 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.135755 | orchestrator |
2026-01-17 00:56:41.135762 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-01-17 00:56:41.135769 | orchestrator | Saturday 17 January 2026 00:50:12 +0000 (0:00:01.496) 0:00:15.754 ******
2026-01-17 00:56:41.135786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.135799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.135813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.135820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.135828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.135836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.135851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.135862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.135871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.135883 | orchestrator |
2026-01-17 00:56:41.135891 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-01-17 00:56:41.135898 | orchestrator | Saturday 17 January 2026 00:50:15 +0000 (0:00:03.295) 0:00:19.049 ******
2026-01-17 00:56:41.135905 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:56:41.135912 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:56:41.135919 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:56:41.135926 | orchestrator |
2026-01-17 00:56:41.135934 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-01-17 00:56:41.135941 | orchestrator | Saturday 17 January 2026 00:50:16 +0000 (0:00:01.137) 0:00:20.187 ******
2026-01-17 00:56:41.135948 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-01-17 00:56:41.135956 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-01-17 00:56:41.135968 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-01-17 00:56:41.135988 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-01-17 00:56:41.135999 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-01-17 00:56:41.136011 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-01-17 00:56:41.136023 | orchestrator |
2026-01-17 00:56:41.136033 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-01-17 00:56:41.136045 | orchestrator | Saturday 17 January 2026 00:50:19 +0000 (0:00:02.818) 0:00:23.005 ******
2026-01-17 00:56:41.136085 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:56:41.136097 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:56:41.136108 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:56:41.136120 | orchestrator |
2026-01-17 00:56:41.136131 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-01-17 00:56:41.136144 | orchestrator | Saturday 17 January 2026 00:50:20 +0000 (0:00:01.107) 0:00:24.112 ******
2026-01-17 00:56:41.136157 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:56:41.136168 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:56:41.136181 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:56:41.136188 | orchestrator |
2026-01-17 00:56:41.136195 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-01-17 00:56:41.136203 | orchestrator | Saturday 17 January 2026 00:50:23 +0000 (0:00:03.180) 0:00:27.292 ******
2026-01-17 00:56:41.136210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.136237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.136246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.136261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c8ddcee0a6918eb2a358c777e633789eea21431a', '__omit_place_holder__c8ddcee0a6918eb2a358c777e633789eea21431a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-01-17 00:56:41.136269 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.136303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.136312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.136320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.136327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c8ddcee0a6918eb2a358c777e633789eea21431a', '__omit_place_holder__c8ddcee0a6918eb2a358c777e633789eea21431a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-01-17 00:56:41.136335 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.136350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.136378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.136387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.136394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c8ddcee0a6918eb2a358c777e633789eea21431a', '__omit_place_holder__c8ddcee0a6918eb2a358c777e633789eea21431a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-01-17 00:56:41.136402 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.136409 | orchestrator |
2026-01-17 00:56:41.136416 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-01-17 00:56:41.136423 | orchestrator | Saturday 17 January 2026 00:50:25 +0000 (0:00:01.932) 0:00:29.225 ******
2026-01-17 00:56:41.136431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-17 00:56:41.136579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-17 00:56:41.136595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-17 00:56:41.136613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-17 00:56:41.136645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-17 00:56:41.136653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c8ddcee0a6918eb2a358c777e633789eea21431a', '__omit_place_holder__c8ddcee0a6918eb2a358c777e633789eea21431a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-17 00:56:41.136661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': 
False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-17 00:56:41.136668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-17 00:56:41.136676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c8ddcee0a6918eb2a358c777e633789eea21431a', '__omit_place_holder__c8ddcee0a6918eb2a358c777e633789eea21431a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-17 00:56:41.136699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-17 00:56:41.136707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-17 00:56:41.136715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c8ddcee0a6918eb2a358c777e633789eea21431a', '__omit_place_holder__c8ddcee0a6918eb2a358c777e633789eea21431a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-17 00:56:41.136723 | orchestrator | 2026-01-17 00:56:41.136730 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-01-17 00:56:41.136737 | orchestrator | Saturday 17 January 2026 00:50:29 +0000 (0:00:03.257) 0:00:32.482 ****** 2026-01-17 00:56:41.136745 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-17 00:56:41.136752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-17 00:56:41.136760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-17 00:56:41.136779 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-17 00:56:41.136791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-17 00:56:41.136799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-17 00:56:41.136806 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-17 00:56:41.136814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-17 00:56:41.136821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-17 00:56:41.136829 | orchestrator | 2026-01-17 00:56:41.136836 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-01-17 00:56:41.136848 | orchestrator | Saturday 17 January 2026 00:50:33 +0000 (0:00:03.929) 0:00:36.411 ****** 2026-01-17 00:56:41.136868 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-17 00:56:41.136880 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-17 00:56:41.136892 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-17 00:56:41.136903 | orchestrator | 2026-01-17 00:56:41.136915 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-01-17 00:56:41.136926 | orchestrator | Saturday 17 January 2026 00:50:37 +0000 (0:00:04.846) 0:00:41.257 ****** 2026-01-17 00:56:41.136937 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-17 00:56:41.136947 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-17 00:56:41.136960 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-17 00:56:41.136972 | orchestrator | 2026-01-17 00:56:41.137779 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-01-17 00:56:41.137808 | orchestrator | Saturday 17 January 2026 00:50:42 +0000 (0:00:04.510) 0:00:45.767 ****** 2026-01-17 00:56:41.137816 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.137823 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.137830 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.137837 | orchestrator | 2026-01-17 00:56:41.137845 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-01-17 00:56:41.137852 | orchestrator | Saturday 17 January 2026 00:50:43 +0000 (0:00:00.564) 0:00:46.332 ****** 2026-01-17 00:56:41.137865 | orchestrator | changed: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-17 00:56:41.137873 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-17 00:56:41.137881 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-17 00:56:41.137888 | orchestrator | 2026-01-17 00:56:41.137895 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-01-17 00:56:41.137903 | orchestrator | Saturday 17 January 2026 00:50:45 +0000 (0:00:02.912) 0:00:49.244 ****** 2026-01-17 00:56:41.137910 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-17 00:56:41.137917 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-17 00:56:41.137924 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-17 00:56:41.137932 | orchestrator | 2026-01-17 00:56:41.137939 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-01-17 00:56:41.137946 | orchestrator | Saturday 17 January 2026 00:50:48 +0000 (0:00:02.812) 0:00:52.057 ****** 2026-01-17 00:56:41.137953 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-01-17 00:56:41.137961 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-01-17 00:56:41.137968 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-01-17 00:56:41.137975 | orchestrator | 2026-01-17 00:56:41.137982 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-01-17 00:56:41.137989 | orchestrator | Saturday 17 January 2026 00:50:50 +0000 (0:00:01.603) 0:00:53.660 ****** 
2026-01-17 00:56:41.137996 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-01-17 00:56:41.138003 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-01-17 00:56:41.138010 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-01-17 00:56:41.138137 | orchestrator | 2026-01-17 00:56:41.138154 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-17 00:56:41.138162 | orchestrator | Saturday 17 January 2026 00:50:51 +0000 (0:00:01.468) 0:00:55.129 ****** 2026-01-17 00:56:41.138169 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:56:41.138176 | orchestrator | 2026-01-17 00:56:41.138183 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-01-17 00:56:41.138190 | orchestrator | Saturday 17 January 2026 00:50:52 +0000 (0:00:00.977) 0:00:56.107 ****** 2026-01-17 00:56:41.138199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-17 00:56:41.138208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-17 00:56:41.138305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-17 00:56:41.138323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-17 00:56:41.138331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-17 00:56:41.138338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-17 00:56:41.138353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-17 00:56:41.138361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-17 00:56:41.138369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-17 00:56:41.138376 | orchestrator | 2026-01-17 00:56:41.138384 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-01-17 00:56:41.138684 | orchestrator | Saturday 17 January 2026 00:50:56 +0000 (0:00:03.635) 0:00:59.742 ****** 2026-01-17 00:56:41.138719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-17 00:56:41.138734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.138743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.138757 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.138766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.138775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.138784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.138792 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.138799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.138833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.138843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.138850 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.138858 | orchestrator |
2026-01-17 00:56:41.138870 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2026-01-17 00:56:41.138878 | orchestrator | Saturday 17 January 2026 00:50:57 +0000 (0:00:01.559) 0:01:01.301 ******
2026-01-17 00:56:41.138886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.138894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.138901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.138909 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.138917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.138942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.138955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.138967 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.138986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.141523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.141574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.141583 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.141590 | orchestrator |
2026-01-17 00:56:41.141598 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-01-17 00:56:41.141606 | orchestrator | Saturday 17 January 2026 00:50:58 +0000 (0:00:00.860) 0:01:02.162 ******
2026-01-17 00:56:41.141613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.141638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.141723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.141732 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.141749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.141756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.141763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.141770 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.141777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.141784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.141797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.141804 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.141811 | orchestrator |
2026-01-17 00:56:41.141818 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-01-17 00:56:41.141825 | orchestrator | Saturday 17 January 2026 00:50:59 +0000 (0:00:00.859) 0:01:03.021 ******
2026-01-17 00:56:41.141839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.141846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.141853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.141860 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.141867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.141901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.141909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.141916 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.141929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.141944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.141951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.141958 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.141965 | orchestrator |
2026-01-17 00:56:41.141972 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-01-17 00:56:41.141979 | orchestrator | Saturday 17 January 2026 00:51:00 +0000 (0:00:00.616) 0:01:03.638 ******
2026-01-17 00:56:41.141985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.141993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.141999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.142006 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.142108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.142128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.142137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.142145 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.142153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.142161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.142169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.142176 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.142184 | orchestrator |
2026-01-17 00:56:41.142191 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2026-01-17 00:56:41.142199 | orchestrator | Saturday 17 January 2026 00:51:01 +0000 (0:00:01.066) 0:01:04.705 ******
2026-01-17 00:56:41.142207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.142229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.142263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.142272 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.142280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.142287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.142295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.142302 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.142310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.142327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.142342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.142354 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.142365 | orchestrator |
2026-01-17 00:56:41.142377 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2026-01-17 00:56:41.142389 | orchestrator | Saturday 17 January 2026 00:51:02 +0000 (0:00:00.959) 0:01:05.665 ******
2026-01-17 00:56:41.142402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.142422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.142436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.142447 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.142457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-17 00:56:41.142476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-17 00:56:41.142497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-17 00:56:41.142509 |
orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.142526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-17 00:56:41.142537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-17 00:56:41.142547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-17 00:56:41.142559 | orchestrator | skipping: [testbed-node-2] 
2026-01-17 00:56:41.142683 | orchestrator | 2026-01-17 00:56:41.142696 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-01-17 00:56:41.142710 | orchestrator | Saturday 17 January 2026 00:51:03 +0000 (0:00:00.805) 0:01:06.471 ****** 2026-01-17 00:56:41.142723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-17 00:56:41.142758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-17 00:56:41.142771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-17 00:56:41.142784 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.142808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-17 00:56:41.142816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-17 00:56:41.142833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-17 00:56:41.142841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-17 00:56:41.142854 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.142889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-17 00:56:41.142902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-17 00:56:41.142982 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.142998 | orchestrator | 2026-01-17 00:56:41.143008 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-01-17 00:56:41.143021 | orchestrator | Saturday 17 January 2026 00:51:04 +0000 (0:00:00.857) 0:01:07.329 ****** 2026-01-17 00:56:41.143034 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-17 00:56:41.143048 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-17 00:56:41.143087 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-17 00:56:41.143096 | orchestrator | 2026-01-17 00:56:41.143103 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-01-17 00:56:41.143110 | orchestrator | Saturday 17 January 2026 00:51:05 +0000 (0:00:01.951) 0:01:09.280 ****** 2026-01-17 00:56:41.143118 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-17 00:56:41.143125 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-17 00:56:41.143133 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-17 00:56:41.143140 | orchestrator | 2026-01-17 00:56:41.143148 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-01-17 00:56:41.143167 | orchestrator | Saturday 17 January 2026 00:51:07 +0000 (0:00:01.707) 0:01:10.988 ****** 2026-01-17 00:56:41.143175 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-17 00:56:41.143183 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-17 00:56:41.143190 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-17 00:56:41.143197 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-17 00:56:41.143204 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.143212 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-17 00:56:41.143219 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.143226 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-17 00:56:41.143233 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.143240 | orchestrator | 2026-01-17 00:56:41.143247 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-01-17 00:56:41.143263 | orchestrator | Saturday 17 January 2026 00:51:08 +0000 (0:00:00.918) 0:01:11.906 ****** 2026-01-17 00:56:41.143347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-17 00:56:41.143359 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-17 00:56:41.143367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-17 00:56:41.143381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-17 00:56:41.143393 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-17 00:56:41.143400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-17 00:56:41.143414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-17 00:56:41.143422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-17 00:56:41.143429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-17 00:56:41.143437 | orchestrator | 2026-01-17 00:56:41.143445 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-01-17 00:56:41.143452 | orchestrator | Saturday 17 January 2026 00:51:11 +0000 (0:00:02.977) 0:01:14.884 ****** 2026-01-17 00:56:41.143459 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:56:41.143467 | orchestrator | 2026-01-17 00:56:41.143474 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-01-17 00:56:41.143481 | orchestrator | Saturday 17 January 2026 00:51:12 +0000 (0:00:00.620) 0:01:15.504 ****** 2026-01-17 00:56:41.143490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-17 00:56:41.143508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-17 00:56:41.143517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-17 00:56:41.143533 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.143541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-17 00:56:41.143548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.143556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-17 00:56:41.143572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-17 00:56:41.143580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.143593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 
'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.143601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.143612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.143624 | orchestrator | 2026-01-17 00:56:41.143636 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-01-17 00:56:41.143647 | orchestrator | Saturday 17 January 2026 00:51:16 +0000 (0:00:04.487) 
0:01:19.991 ****** 2026-01-17 00:56:41.143659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-17 00:56:41.143680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-17 00:56:41.143700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-17 00:56:41.143725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-17 00:56:41.143738 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.143749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-17 00:56:41.143763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-17 00:56:41.143775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-17 00:56:41.143788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-17 00:56:41.143800 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.143825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-17 00:56:41.143848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-17 00:56:41.143862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-17 00:56:41.143875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-17 00:56:41.143889 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.143900 | orchestrator |
2026-01-17 00:56:41.143913 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-01-17 00:56:41.143925 | orchestrator | Saturday 17 January 2026 00:51:17 +0000 (0:00:01.258) 0:01:21.250 ******
2026-01-17 00:56:41.143939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-01-17 00:56:41.143955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-01-17 00:56:41.143969 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.143982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-01-17 00:56:41.143995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-01-17 00:56:41.144009 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.144022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-01-17 00:56:41.144032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-01-17 00:56:41.144046 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.144072 | orchestrator |
2026-01-17 00:56:41.144087 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2026-01-17 00:56:41.144095 | orchestrator | Saturday 17 January 2026 00:51:19 +0000 (0:00:01.070) 0:01:22.321 ******
2026-01-17 00:56:41.144102 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:56:41.144110 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:56:41.144117 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:56:41.144124 | orchestrator |
2026-01-17 00:56:41.144131 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-01-17 00:56:41.144138 | orchestrator | Saturday 17 January 2026 00:51:20 +0000 (0:00:01.578) 0:01:23.899 ******
2026-01-17 00:56:41.144145 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:56:41.144158 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:56:41.144166 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:56:41.144173 | orchestrator |
2026-01-17 00:56:41.144180 | orchestrator | TASK [include_role : barbican] *************************************************
2026-01-17 00:56:41.144187 | orchestrator | Saturday 17 January 2026 00:51:22 +0000 (0:00:02.411) 0:01:26.310 ******
2026-01-17 00:56:41.144194 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:56:41.144201 | orchestrator |
2026-01-17 00:56:41.144208 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2026-01-17 00:56:41.144216 | orchestrator | Saturday 17 January 2026 00:51:24 +0000 (0:00:01.152) 0:01:27.463 ******
2026-01-17 00:56:41.144224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-17 00:56:41.144232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-17 00:56:41.144240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-17 00:56:41.144248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-17 00:56:41.144270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-17 00:56:41.144278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-17 00:56:41.144286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-17 00:56:41.144293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-17 00:56:41.144301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-17 00:56:41.144313 | orchestrator |
2026-01-17 00:56:41.144321 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-01-17 00:56:41.144328 | orchestrator | Saturday 17 January 2026 00:51:30 +0000 (0:00:06.596) 0:01:34.059 ******
2026-01-17 00:56:41.144341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-17 00:56:41.144353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-17 00:56:41.144361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-17 00:56:41.144368 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.144376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-17 00:56:41.144384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-17 00:56:41.144396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-17 00:56:41.144403 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.144420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-17 00:56:41.144428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-17 00:56:41.144435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-17 00:56:41.144443 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.144450 | orchestrator |
2026-01-17 00:56:41.144457 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2026-01-17 00:56:41.144465 | orchestrator | Saturday 17 January 2026 00:51:31 +0000 (0:00:00.552) 0:01:34.612 ******
2026-01-17 00:56:41.144473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-17 00:56:41.144481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-17 00:56:41.144496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-17 00:56:41.144504 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.144511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-17 00:56:41.144518 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.144526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-17 00:56:41.144533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-17 00:56:41.144540 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.144547 | orchestrator |
2026-01-17 00:56:41.144554 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2026-01-17 00:56:41.144562 | orchestrator | Saturday 17 January 2026 00:51:32 +0000 (0:00:00.901) 0:01:35.514 ******
2026-01-17 00:56:41.144569 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:56:41.144576 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:56:41.144583 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:56:41.144590 | orchestrator |
2026-01-17 00:56:41.144597 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2026-01-17 00:56:41.144605 | orchestrator | Saturday 17 January 2026 00:51:33 +0000 (0:00:01.483) 0:01:36.997 ******
2026-01-17 00:56:41.144612 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:56:41.144619 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:56:41.144626 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:56:41.144633 | orchestrator |
2026-01-17 00:56:41.144644 | orchestrator | TASK [include_role : blazar] ***************************************************
2026-01-17 00:56:41.144652 | orchestrator | Saturday 17 January 2026 00:51:35 +0000 (0:00:02.148) 0:01:39.146 ******
2026-01-17 00:56:41.144659 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.144667 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.144674 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.144681 | orchestrator |
2026-01-17 00:56:41.144688 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2026-01-17 00:56:41.144695 | orchestrator | Saturday 17 January 2026 00:51:36 +0000 (0:00:00.316) 0:01:39.462 ******
2026-01-17 00:56:41.144707 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:56:41.144714 | orchestrator |
2026-01-17 00:56:41.144721 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2026-01-17 00:56:41.144729 | orchestrator | Saturday 17 January 2026 00:51:37 +0000 (0:00:00.902) 0:01:40.365 ******
2026-01-17 00:56:41.144737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-01-17 00:56:41.144751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-01-17 00:56:41.144759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-01-17 00:56:41.144766 | orchestrator |
2026-01-17 00:56:41.144773 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2026-01-17 00:56:41.144781 | orchestrator | Saturday 17 January 2026 00:51:39 +0000 (0:00:02.832) 0:01:43.197 ******
2026-01-17 00:56:41.144793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-01-17 00:56:41.144801 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.144813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-01-17 00:56:41.144820 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.144828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-01-17 00:56:41.144844 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.144851 | orchestrator |
2026-01-17 00:56:41.144858 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2026-01-17 00:56:41.144865 | orchestrator | Saturday 17 January 2026 00:51:42 +0000 (0:00:02.442) 0:01:45.640 ******
2026-01-17 00:56:41.144874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-01-17 00:56:41.144884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-01-17 00:56:41.144893 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.144900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-01-17 00:56:41.144908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-01-17 00:56:41.144915 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.144928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-01-17 00:56:41.144941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-01-17 00:56:41.144949 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.144956 | orchestrator |
2026-01-17 00:56:41.144963 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2026-01-17 00:56:41.144970 | orchestrator | Saturday 17 January 2026 00:51:45 +0000 (0:00:02.896) 0:01:48.536 ******
2026-01-17 00:56:41.144983 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.144990 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.144997 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.145004 | orchestrator |
2026-01-17 00:56:41.145011 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2026-01-17 00:56:41.145018 | orchestrator | Saturday 17 January 2026 00:51:45 +0000 (0:00:00.572) 0:01:49.109 ******
2026-01-17 00:56:41.145026 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.145033 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.145040 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.145047 | orchestrator |
2026-01-17 00:56:41.145072 | orchestrator | TASK [include_role : cinder] ***************************************************
2026-01-17 00:56:41.145080 | orchestrator | Saturday 17 January 2026 00:51:46 +0000 (0:00:00.953) 0:01:50.062 ******
2026-01-17 00:56:41.145087 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:56:41.145094 | orchestrator |
2026-01-17 00:56:41.145102 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2026-01-17 00:56:41.145109 | orchestrator | Saturday 17 January 2026 00:51:47 +0000 (0:00:00.660) 0:01:50.722 ******
2026-01-17 00:56:41.145116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-17 00:56:41.145124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-17 00:56:41.145132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-17 00:56:41.145146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.145164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-17 00:56:41.145172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.145180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.145187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.145199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}}) 2026-01-17 00:56:41.145211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.145224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.145231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.145239 | orchestrator | 2026-01-17 00:56:41.145246 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-01-17 00:56:41.145254 | orchestrator | Saturday 17 January 2026 00:51:52 +0000 (0:00:05.553) 0:01:56.276 ****** 2026-01-17 00:56:41.145261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-17 00:56:41.145270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.145290 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-17 00:56:41.145315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.145327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.145339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.145352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.145364 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.145375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.145395 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.145907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-17 00:56:41.145943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.145956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.145969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.145980 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.145993 | orchestrator | 2026-01-17 00:56:41.146006 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-01-17 00:56:41.146100 | orchestrator | Saturday 17 January 2026 00:51:54 +0000 (0:00:01.204) 
0:01:57.481 ****** 2026-01-17 00:56:41.146110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-17 00:56:41.146120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-17 00:56:41.146138 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.146146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-17 00:56:41.146154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-17 00:56:41.146161 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.146169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-17 00:56:41.147745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-17 00:56:41.147778 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.147787 | orchestrator | 2026-01-17 00:56:41.147794 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-01-17 00:56:41.147802 | orchestrator | Saturday 17 January 2026 00:51:55 
+0000 (0:00:01.471) 0:01:58.953 ****** 2026-01-17 00:56:41.147815 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.147828 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.147839 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.147851 | orchestrator | 2026-01-17 00:56:41.147977 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-01-17 00:56:41.147986 | orchestrator | Saturday 17 January 2026 00:51:56 +0000 (0:00:01.310) 0:02:00.264 ****** 2026-01-17 00:56:41.147993 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.148001 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.148008 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.148015 | orchestrator | 2026-01-17 00:56:41.148022 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-01-17 00:56:41.148029 | orchestrator | Saturday 17 January 2026 00:51:59 +0000 (0:00:02.218) 0:02:02.482 ****** 2026-01-17 00:56:41.148037 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.148044 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.148051 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.148079 | orchestrator | 2026-01-17 00:56:41.148087 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-01-17 00:56:41.148094 | orchestrator | Saturday 17 January 2026 00:51:59 +0000 (0:00:00.428) 0:02:02.910 ****** 2026-01-17 00:56:41.148101 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.148116 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.148124 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.148131 | orchestrator | 2026-01-17 00:56:41.148138 | orchestrator | TASK [include_role : designate] ************************************************ 2026-01-17 00:56:41.148145 | orchestrator | Saturday 17 January 2026 00:51:59 +0000 
(0:00:00.295) 0:02:03.206 ****** 2026-01-17 00:56:41.148152 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:56:41.148159 | orchestrator | 2026-01-17 00:56:41.148166 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-01-17 00:56:41.148173 | orchestrator | Saturday 17 January 2026 00:52:00 +0000 (0:00:00.836) 0:02:04.042 ****** 2026-01-17 00:56:41.148182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-17 00:56:41.148202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}})  2026-01-17 00:56:41.148210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148254 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-17 00:56:41.148261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-17 00:56:41.148281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-17 
00:56:41.148312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-17 00:56:41.148346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-17 00:56:41.148360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148398 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148406 | orchestrator | 2026-01-17 00:56:41.148414 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-01-17 00:56:41.148422 | orchestrator | Saturday 17 January 2026 00:52:05 +0000 (0:00:04.810) 0:02:08.853 ****** 2026-01-17 00:56:41.148448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-17 00:56:41.148465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-17 00:56:41.148474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148521 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.148535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-17 00:56:41.148546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-17 00:56:41.148554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-17 00:56:41.148567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-17 00:56:41.148584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-17 
00:56:41.148630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148656 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.148664 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.148686 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.148694 | orchestrator | 2026-01-17 00:56:41.148703 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-01-17 00:56:41.148711 | orchestrator | Saturday 17 January 2026 00:52:06 +0000 (0:00:00.854) 0:02:09.708 ****** 2026-01-17 00:56:41.148726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-01-17 00:56:41.148736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}})  2026-01-17 00:56:41.148749 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.148756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-01-17 00:56:41.148763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-01-17 00:56:41.148771 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.148778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-01-17 00:56:41.148785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-01-17 00:56:41.148792 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.148799 | orchestrator | 2026-01-17 00:56:41.148806 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-01-17 00:56:41.148818 | orchestrator | Saturday 17 January 2026 00:52:07 +0000 (0:00:01.042) 0:02:10.750 ****** 2026-01-17 00:56:41.148830 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.148841 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.148853 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.148865 | orchestrator | 2026-01-17 00:56:41.148876 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-01-17 00:56:41.148886 | orchestrator | Saturday 17 January 2026 00:52:09 +0000 (0:00:01.905) 0:02:12.656 ****** 2026-01-17 00:56:41.148896 | orchestrator | changed: 
[testbed-node-0] 2026-01-17 00:56:41.148909 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.148920 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.148932 | orchestrator | 2026-01-17 00:56:41.148940 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-01-17 00:56:41.148947 | orchestrator | Saturday 17 January 2026 00:52:11 +0000 (0:00:02.011) 0:02:14.668 ****** 2026-01-17 00:56:41.148954 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.148961 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.148968 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.148976 | orchestrator | 2026-01-17 00:56:41.148983 | orchestrator | TASK [include_role : glance] *************************************************** 2026-01-17 00:56:41.148990 | orchestrator | Saturday 17 January 2026 00:52:11 +0000 (0:00:00.554) 0:02:15.223 ****** 2026-01-17 00:56:41.148997 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:56:41.149004 | orchestrator | 2026-01-17 00:56:41.149011 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-01-17 00:56:41.149018 | orchestrator | Saturday 17 January 2026 00:52:12 +0000 (0:00:00.803) 0:02:16.026 ****** 2026-01-17 00:56:41.149038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-17 00:56:41.149084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-17 00:56:41.149100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-17 00:56:41.149129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-17 00:56:41.149139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-17 00:56:41.149156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-17 00:56:41.149169 | orchestrator | 2026-01-17 00:56:41.149176 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-01-17 00:56:41.149184 | orchestrator | Saturday 17 January 2026 00:52:16 +0000 (0:00:04.203) 0:02:20.230 ****** 2026-01-17 00:56:41.149192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-17 00:56:41.149208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-17 00:56:41.149221 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.149234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}}}})  2026-01-17 00:56:41.149258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-17 00:56:41.149276 | orchestrator | skipping: 
[testbed-node-1] 2026-01-17 00:56:41.149287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-17 00:56:41.149304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-17 00:56:41.149323 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.149334 | orchestrator | 2026-01-17 00:56:41.149345 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-01-17 00:56:41.149356 | orchestrator | Saturday 17 January 
2026 00:52:20 +0000 (0:00:03.613) 0:02:23.843 ****** 2026-01-17 00:56:41.149372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-17 00:56:41.149384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-17 00:56:41.149395 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.149407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-17 00:56:41.149423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-17 00:56:41.149436 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.149448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-17 00:56:41.149460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-17 00:56:41.149473 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.149480 | orchestrator | 2026-01-17 00:56:41.149487 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-01-17 00:56:41.149494 | orchestrator | Saturday 17 January 2026 00:52:23 +0000 (0:00:03.414) 0:02:27.257 ****** 2026-01-17 00:56:41.149502 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.149509 | orchestrator | 
changed: [testbed-node-1] 2026-01-17 00:56:41.149516 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.149523 | orchestrator | 2026-01-17 00:56:41.149530 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-01-17 00:56:41.149537 | orchestrator | Saturday 17 January 2026 00:52:25 +0000 (0:00:01.449) 0:02:28.706 ****** 2026-01-17 00:56:41.149544 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.149551 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.149558 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.149565 | orchestrator | 2026-01-17 00:56:41.149572 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-01-17 00:56:41.149584 | orchestrator | Saturday 17 January 2026 00:52:27 +0000 (0:00:02.231) 0:02:30.938 ****** 2026-01-17 00:56:41.149592 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.149599 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.149606 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.149613 | orchestrator | 2026-01-17 00:56:41.149620 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-01-17 00:56:41.149627 | orchestrator | Saturday 17 January 2026 00:52:28 +0000 (0:00:00.596) 0:02:31.534 ****** 2026-01-17 00:56:41.149634 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:56:41.149641 | orchestrator | 2026-01-17 00:56:41.149654 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-01-17 00:56:41.149662 | orchestrator | Saturday 17 January 2026 00:52:29 +0000 (0:00:00.865) 0:02:32.400 ****** 2026-01-17 00:56:41.149670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-17 00:56:41.149678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-17 00:56:41.149686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-17 00:56:41.149698 | orchestrator | 2026-01-17 00:56:41.149706 | 
orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-01-17 00:56:41.149713 | orchestrator | Saturday 17 January 2026 00:52:32 +0000 (0:00:03.452) 0:02:35.853 ****** 2026-01-17 00:56:41.149720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-17 00:56:41.149731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-17 00:56:41.149739 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.149746 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.149757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-17 00:56:41.149765 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.149772 | orchestrator | 2026-01-17 00:56:41.149779 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-01-17 00:56:41.149786 | orchestrator | Saturday 17 January 2026 00:52:33 +0000 (0:00:00.799) 0:02:36.652 ****** 2026-01-17 00:56:41.149793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-17 00:56:41.149801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-17 00:56:41.149809 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.149822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-17 00:56:41.149833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-17 00:56:41.149852 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.149865 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-17 00:56:41.149876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-17 00:56:41.149887 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.149898 | orchestrator | 2026-01-17 00:56:41.149910 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-01-17 00:56:41.149922 | orchestrator | Saturday 17 January 2026 00:52:34 +0000 (0:00:00.674) 0:02:37.327 ****** 2026-01-17 00:56:41.149934 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.149946 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.149957 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.149968 | orchestrator | 2026-01-17 00:56:41.149975 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-01-17 00:56:41.149983 | orchestrator | Saturday 17 January 2026 00:52:35 +0000 (0:00:01.392) 0:02:38.720 ****** 2026-01-17 00:56:41.149990 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.149997 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.150004 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.150011 | orchestrator | 2026-01-17 00:56:41.150133 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-01-17 00:56:41.150148 | orchestrator | Saturday 17 January 2026 00:52:37 +0000 (0:00:02.140) 0:02:40.860 ****** 2026-01-17 00:56:41.150156 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.150163 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.150170 | orchestrator | skipping: 
[testbed-node-2] 2026-01-17 00:56:41.150177 | orchestrator | 2026-01-17 00:56:41.150185 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-01-17 00:56:41.150192 | orchestrator | Saturday 17 January 2026 00:52:38 +0000 (0:00:00.576) 0:02:41.437 ****** 2026-01-17 00:56:41.150199 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:56:41.150206 | orchestrator | 2026-01-17 00:56:41.150213 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-01-17 00:56:41.150220 | orchestrator | Saturday 17 January 2026 00:52:39 +0000 (0:00:00.957) 0:02:42.395 ****** 2026-01-17 00:56:41.150255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-17 00:56:41.150273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-17 00:56:41.150292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-17 00:56:41.150306 | orchestrator | 2026-01-17 00:56:41.150313 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-01-17 00:56:41.150320 | orchestrator | Saturday 17 January 2026 00:52:42 +0000 (0:00:03.922) 0:02:46.317 ****** 2026-01-17 00:56:41.150333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-17 00:56:41.150342 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.150353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}})  2026-01-17 00:56:41.150366 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.150380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-17 00:56:41.150388 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.150395 | orchestrator | 2026-01-17 00:56:41.150402 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-01-17 00:56:41.150413 | orchestrator | Saturday 17 January 2026 00:52:44 +0000 (0:00:01.185) 0:02:47.503 ****** 2026-01-17 00:56:41.150427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-17 00:56:41.150436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-17 00:56:41.150446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-17 00:56:41.150454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-17 00:56:41.150461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-17 00:56:41.150469 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.150476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-17 00:56:41.150484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-17 00:56:41.150491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-17 00:56:41.150499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-17 00:56:41.150506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-17 00:56:41.150514 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.150521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-17 00:56:41.150541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-17 00:56:41.150557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-17 00:56:41.150565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-17 00:56:41.150572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-17 00:56:41.150579 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.150587 | orchestrator | 2026-01-17 00:56:41.150594 | orchestrator | TASK 
[proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-01-17 00:56:41.150602 | orchestrator | Saturday 17 January 2026 00:52:45 +0000 (0:00:01.018) 0:02:48.522 ****** 2026-01-17 00:56:41.150609 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.150616 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.150623 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.150630 | orchestrator | 2026-01-17 00:56:41.150638 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-01-17 00:56:41.150644 | orchestrator | Saturday 17 January 2026 00:52:46 +0000 (0:00:01.301) 0:02:49.823 ****** 2026-01-17 00:56:41.150651 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.150658 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.150664 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.150675 | orchestrator | 2026-01-17 00:56:41.150687 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-01-17 00:56:41.150698 | orchestrator | Saturday 17 January 2026 00:52:48 +0000 (0:00:01.977) 0:02:51.801 ****** 2026-01-17 00:56:41.150710 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.150721 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.150733 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.150744 | orchestrator | 2026-01-17 00:56:41.150754 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-01-17 00:56:41.150765 | orchestrator | Saturday 17 January 2026 00:52:48 +0000 (0:00:00.327) 0:02:52.128 ****** 2026-01-17 00:56:41.150776 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.150786 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.150797 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.150809 | orchestrator | 2026-01-17 00:56:41.150821 | orchestrator | TASK 
[include_role : keystone] ************************************************* 2026-01-17 00:56:41.150834 | orchestrator | Saturday 17 January 2026 00:52:49 +0000 (0:00:00.606) 0:02:52.735 ****** 2026-01-17 00:56:41.150846 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:56:41.150894 | orchestrator | 2026-01-17 00:56:41.150902 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-01-17 00:56:41.150909 | orchestrator | Saturday 17 January 2026 00:52:50 +0000 (0:00:01.048) 0:02:53.783 ****** 2026-01-17 00:56:41.150916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-17 00:56:41.150936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-17 00:56:41.150951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-17 00:56:41.150959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-17 
00:56:41.150967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-17 00:56:41.150974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-17 00:56:41.150986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-17 00:56:41.151002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-17 00:56:41.151010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-17 00:56:41.151017 | orchestrator | 2026-01-17 00:56:41.151051 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-01-17 00:56:41.151085 | orchestrator | Saturday 17 January 2026 00:52:54 +0000 (0:00:03.662) 0:02:57.446 ****** 2026-01-17 00:56:41.151098 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-17 00:56:41.151161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-17 00:56:41.151183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-17 00:56:41.151196 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.151222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-17 00:56:41.151236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-17 00:56:41.151245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-17 00:56:41.151251 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.151258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-17 00:56:41.151271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-17 00:56:41.151278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-17 00:56:41.151285 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.151292 | orchestrator | 2026-01-17 00:56:41.151298 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-01-17 00:56:41.151311 | orchestrator | Saturday 17 January 2026 00:52:55 +0000 (0:00:01.008) 0:02:58.454 ****** 2026-01-17 00:56:41.151319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-17 00:56:41.151330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-17 00:56:41.151338 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.151345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-17 00:56:41.151352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-17 00:56:41.151359 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.151366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-17 00:56:41.151373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-17 00:56:41.151379 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.151386 | orchestrator | 2026-01-17 00:56:41.151393 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-01-17 00:56:41.151400 | orchestrator | Saturday 17 January 2026 00:52:56 +0000 (0:00:00.876) 0:02:59.330 ****** 2026-01-17 00:56:41.151407 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.151430 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.151448 | orchestrator | changed: [testbed-node-2] 2026-01-17 
00:56:41.151461 | orchestrator | 2026-01-17 00:56:41.151472 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-01-17 00:56:41.151483 | orchestrator | Saturday 17 January 2026 00:52:57 +0000 (0:00:01.427) 0:03:00.758 ****** 2026-01-17 00:56:41.151494 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.151505 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.151516 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.151527 | orchestrator | 2026-01-17 00:56:41.151538 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-01-17 00:56:41.151548 | orchestrator | Saturday 17 January 2026 00:52:59 +0000 (0:00:02.377) 0:03:03.135 ****** 2026-01-17 00:56:41.151554 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.151561 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.151568 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.151574 | orchestrator | 2026-01-17 00:56:41.151581 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-01-17 00:56:41.151587 | orchestrator | Saturday 17 January 2026 00:53:00 +0000 (0:00:00.582) 0:03:03.718 ****** 2026-01-17 00:56:41.151594 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:56:41.151600 | orchestrator | 2026-01-17 00:56:41.151607 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-01-17 00:56:41.151614 | orchestrator | Saturday 17 January 2026 00:53:01 +0000 (0:00:00.970) 0:03:04.688 ****** 2026-01-17 00:56:41.151621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-17 00:56:41.151640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.151649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-17 00:56:41.151663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.151670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}}}}) 2026-01-17 00:56:41.151677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.151687 | orchestrator | 2026-01-17 00:56:41.151699 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-01-17 00:56:41.151709 | orchestrator | Saturday 17 January 2026 00:53:05 +0000 (0:00:03.690) 0:03:08.378 ****** 2026-01-17 00:56:41.151726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}}}})  2026-01-17 00:56:41.151738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.151755 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.151765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-17 00:56:41.151798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-17 00:56:41.151815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.151831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.151842 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.151853 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.151872 | orchestrator | 2026-01-17 00:56:41.151882 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-01-17 00:56:41.151893 | orchestrator | Saturday 17 January 2026 00:53:06 +0000 (0:00:01.020) 0:03:09.399 ****** 2026-01-17 00:56:41.151906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-17 00:56:41.151919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-17 00:56:41.151931 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.151941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-17 00:56:41.151948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-17 00:56:41.151955 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.151962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}})  2026-01-17 00:56:41.151968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-17 00:56:41.151975 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.151982 | orchestrator | 2026-01-17 00:56:41.151989 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-01-17 00:56:41.151996 | orchestrator | Saturday 17 January 2026 00:53:07 +0000 (0:00:00.925) 0:03:10.325 ****** 2026-01-17 00:56:41.152002 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.152009 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.152015 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.152022 | orchestrator | 2026-01-17 00:56:41.152029 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-01-17 00:56:41.152035 | orchestrator | Saturday 17 January 2026 00:53:08 +0000 (0:00:01.322) 0:03:11.647 ****** 2026-01-17 00:56:41.152042 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.152052 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.152124 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.152135 | orchestrator | 2026-01-17 00:56:41.152147 | orchestrator | TASK [include_role : manila] *************************************************** 2026-01-17 00:56:41.152158 | orchestrator | Saturday 17 January 2026 00:53:10 +0000 (0:00:02.253) 0:03:13.901 ****** 2026-01-17 00:56:41.152169 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:56:41.152180 | orchestrator | 2026-01-17 00:56:41.152190 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-01-17 00:56:41.152200 | orchestrator | Saturday 17 January 2026 00:53:12 +0000 
(0:00:01.449) 0:03:15.350 ****** 2026-01-17 00:56:41.152212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-17 00:56:41.152252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.152265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-17 00:56:41.152276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.152289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.152300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.152312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.152336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.152351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-17 00:56:41.152364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.152375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.152386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 
'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.152397 | orchestrator | 2026-01-17 00:56:41.152407 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-01-17 00:56:41.152418 | orchestrator | Saturday 17 January 2026 00:53:16 +0000 (0:00:04.327) 0:03:19.677 ****** 2026-01-17 00:56:41.152436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-17 00:56:41.152459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.152472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.152484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.152495 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.152507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-17 00:56:41.152519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.152530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.152560 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.152572 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.152584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-17 00:56:41.152595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.152606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.152616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.152633 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.152644 | orchestrator | 2026-01-17 00:56:41.152654 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-01-17 00:56:41.152665 | orchestrator | Saturday 17 January 2026 00:53:17 +0000 (0:00:00.750) 0:03:20.428 ****** 2026-01-17 00:56:41.152677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8786', 'listen_port': '8786'}})  2026-01-17 00:56:41.152687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-17 00:56:41.152698 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.152707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-17 00:56:41 | INFO  | Task 6adcda3a-ea21-49a7-8993-d3a3658b387d is in state STARTED 2026-01-17 00:56:41.152724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-17 00:56:41.152746 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.152762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-17 00:56:41.152773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-17 00:56:41.152784 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.152796 | orchestrator | 2026-01-17 00:56:41.152807 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-01-17 00:56:41.152818 | orchestrator | Saturday 17 January 2026 00:53:18 +0000 (0:00:01.490) 0:03:21.919 ****** 2026-01-17 00:56:41.152829 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.152841 | orchestrator | changed: 
[testbed-node-1] 2026-01-17 00:56:41.152852 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.152863 | orchestrator | 2026-01-17 00:56:41.152874 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-01-17 00:56:41.152886 | orchestrator | Saturday 17 January 2026 00:53:20 +0000 (0:00:01.411) 0:03:23.330 ****** 2026-01-17 00:56:41.152897 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.152908 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.152919 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.152930 | orchestrator | 2026-01-17 00:56:41.152942 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-01-17 00:56:41.152953 | orchestrator | Saturday 17 January 2026 00:53:22 +0000 (0:00:02.232) 0:03:25.562 ****** 2026-01-17 00:56:41.152963 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:56:41.152974 | orchestrator | 2026-01-17 00:56:41.152983 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-01-17 00:56:41.152994 | orchestrator | Saturday 17 January 2026 00:53:23 +0000 (0:00:01.371) 0:03:26.934 ****** 2026-01-17 00:56:41.153005 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-17 00:56:41.153016 | orchestrator | 2026-01-17 00:56:41.153026 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-01-17 00:56:41.153037 | orchestrator | Saturday 17 January 2026 00:53:26 +0000 (0:00:03.227) 0:03:30.162 ****** 2026-01-17 00:56:41.153050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-17 00:56:41.153105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-17 00:56:41.153117 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.153133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-17 00:56:41.153152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-17 00:56:41.153162 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.153185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-17 00:56:41.153196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-17 00:56:41.153206 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.153216 | orchestrator | 2026-01-17 00:56:41.153226 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-01-17 00:56:41.153236 | orchestrator | Saturday 17 January 2026 00:53:29 +0000 (0:00:02.162) 0:03:32.325 ****** 2026-01-17 00:56:41.153247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-17 00:56:41.153265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-17 00:56:41.153276 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.153892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-17 00:56:41.153933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-17 00:56:41.153962 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.153974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-17 00:56:41.154144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-17 00:56:41.154171 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.154183 | orchestrator | 2026-01-17 00:56:41.154193 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-01-17 00:56:41.154203 | orchestrator | Saturday 17 January 2026 00:53:31 +0000 (0:00:02.307) 0:03:34.633 ****** 2026-01-17 00:56:41.154214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-17 00:56:41.154226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-17 00:56:41.154278 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.154288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-17 00:56:41.154298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-17 00:56:41.154308 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.154319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-17 00:56:41.154404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-17 00:56:41.154419 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.154429 | orchestrator | 2026-01-17 00:56:41.154440 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-01-17 00:56:41.154451 | orchestrator | Saturday 17 January 2026 00:53:34 +0000 (0:00:03.121) 0:03:37.755 ****** 2026-01-17 00:56:41.154478 | orchestrator | changed: [testbed-node-0] 2026-01-17 
00:56:41.154495 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.154507 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.154516 | orchestrator | 2026-01-17 00:56:41.154525 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-01-17 00:56:41.154536 | orchestrator | Saturday 17 January 2026 00:53:36 +0000 (0:00:02.083) 0:03:39.838 ****** 2026-01-17 00:56:41.154546 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.154557 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.154566 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.154575 | orchestrator | 2026-01-17 00:56:41.154594 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-01-17 00:56:41.154606 | orchestrator | Saturday 17 January 2026 00:53:37 +0000 (0:00:01.458) 0:03:41.297 ****** 2026-01-17 00:56:41.154618 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.154628 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.154638 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.154647 | orchestrator | 2026-01-17 00:56:41.154658 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-01-17 00:56:41.154668 | orchestrator | Saturday 17 January 2026 00:53:38 +0000 (0:00:00.344) 0:03:41.641 ****** 2026-01-17 00:56:41.154678 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:56:41.154690 | orchestrator | 2026-01-17 00:56:41.154700 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-01-17 00:56:41.154710 | orchestrator | Saturday 17 January 2026 00:53:39 +0000 (0:00:01.413) 0:03:43.054 ****** 2026-01-17 00:56:41.154721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-17 00:56:41.154735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-17 00:56:41.154758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 
'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-17 00:56:41.154784 | orchestrator | 2026-01-17 00:56:41.154812 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-01-17 00:56:41.154838 | orchestrator | Saturday 17 January 2026 00:53:41 +0000 (0:00:01.728) 0:03:44.783 ****** 2026-01-17 00:56:41.154999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-17 00:56:41.155124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-17 00:56:41.155139 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.155149 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.155159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-17 00:56:41.155169 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.155179 | orchestrator | 2026-01-17 00:56:41.155188 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-01-17 00:56:41.155198 | orchestrator | Saturday 17 January 2026 00:53:41 +0000 (0:00:00.404) 0:03:45.187 ****** 2026-01-17 00:56:41.155210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-17 00:56:41.155221 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.155231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-17 00:56:41.155242 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.155252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-17 00:56:41.155264 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.155275 | orchestrator | 2026-01-17 00:56:41.155284 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-01-17 00:56:41.155294 | orchestrator | Saturday 17 January 2026 00:53:42 +0000 (0:00:00.869) 0:03:46.057 ****** 2026-01-17 00:56:41.155303 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.155313 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.155322 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.155332 | orchestrator | 2026-01-17 00:56:41.155341 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-01-17 00:56:41.155351 | orchestrator | Saturday 17 January 2026 00:53:43 +0000 (0:00:00.490) 0:03:46.547 ****** 2026-01-17 00:56:41.155393 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.155404 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.155413 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.155423 | orchestrator | 2026-01-17 00:56:41.155433 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-01-17 00:56:41.155444 | orchestrator | Saturday 17 January 2026 00:53:44 +0000 (0:00:01.364) 0:03:47.912 ****** 2026-01-17 00:56:41.155454 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.155464 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.155473 | orchestrator | 
skipping: [testbed-node-2] 2026-01-17 00:56:41.155483 | orchestrator | 2026-01-17 00:56:41.155493 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-01-17 00:56:41.155588 | orchestrator | Saturday 17 January 2026 00:53:44 +0000 (0:00:00.337) 0:03:48.249 ****** 2026-01-17 00:56:41.155601 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:56:41.155612 | orchestrator | 2026-01-17 00:56:41.155621 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-01-17 00:56:41.155630 | orchestrator | Saturday 17 January 2026 00:53:46 +0000 (0:00:01.584) 0:03:49.833 ****** 2026-01-17 00:56:41.155649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-17 00:56:41.155660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.155669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.155681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-17 
00:56:41.155767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-17 00:56:41.155790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.155801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-17 00:56:41.155813 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-17 00:56:41.155823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.155833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-17 00:56:41.155850 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.155926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-17 00:56:41.155948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-17 00:56:41.155959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.155970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-17 00:56:41.155982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-17 00:56:41.156001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-17 00:56:41.156145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.156162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': 
{'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.156174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.156185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-17 00:56:41.156230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.156299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-17 00:56:41.156316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-17 00:56:41.156327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-17 00:56:41.156337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.156347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.156362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-17 00:56:41.156371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.156445 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.156457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.156467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-17 00:56:41.156476 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-17 00:56:41.156491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-17 00:56:41.156501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.156565 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.156577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-17 00:56:41.156587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-17 00:56:41.156596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-17 00:56:41.156611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-17 00:56:41.156620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.156677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-17 00:56:41.156691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.156701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-17 00:56:41.156710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-17 00:56:41.156726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.156736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': 
{'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-17 00:56:41.156811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-17 00:56:41.156824 | orchestrator | 2026-01-17 00:56:41.156834 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-01-17 00:56:41.156843 | orchestrator | Saturday 17 January 2026 00:53:50 +0000 (0:00:04.349) 0:03:54.183 ****** 2026-01-17 00:56:41.156857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-17 00:56:41.156866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.156881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.156891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.156951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-17 00:56:41.156966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-17 00:56:41.156991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.157008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.157018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.157027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-17 00:56:41.157126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.157147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-17 00:56:41.157157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-17 00:56:41.157175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.157184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 
'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.157192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-17 00:56:41.157202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-17 00:56:41.157298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-17 00:56:41.157312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-17 00:56:41.157329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.157338 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.157347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.157357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-17 00:56:41.157425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-17 00:56:41.157437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.157453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-17 00:56:41.157463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.157473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.157482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.157545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-17 00:56:41.157557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-17 00:56:41.157590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-17 00:56:41.157685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-17 00:56:41.157709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.157720 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-17 00:56:41.157798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.157811 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.157822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-17 00:56:41.157854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': 
{'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-17 00:56:41.157864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-17 00:56:41.157874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-17 00:56:41.157884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.157893 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.157929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-17 00:56:41.157944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.157959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-17 00:56:41.157968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-17 00:56:41.157977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.157986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-17 00:56:41.158093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-17 00:56:41.158114 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.158125 | orchestrator | 2026-01-17 00:56:41.158135 | orchestrator | TASK 
[haproxy-config : Configuring firewall for neutron] *********************** 2026-01-17 00:56:41.158149 | orchestrator | Saturday 17 January 2026 00:53:52 +0000 (0:00:01.472) 0:03:55.655 ****** 2026-01-17 00:56:41.158159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-17 00:56:41.158169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-17 00:56:41.158178 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.158187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-17 00:56:41.158196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-17 00:56:41.158205 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.158214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-17 00:56:41.158223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-17 00:56:41.158231 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.158240 | orchestrator | 2026-01-17 00:56:41.158248 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-01-17 
00:56:41.158257 | orchestrator | Saturday 17 January 2026 00:53:54 +0000 (0:00:02.146) 0:03:57.802 ****** 2026-01-17 00:56:41.158265 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.158273 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.158282 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.158292 | orchestrator | 2026-01-17 00:56:41.158300 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-01-17 00:56:41.158309 | orchestrator | Saturday 17 January 2026 00:53:55 +0000 (0:00:01.372) 0:03:59.174 ****** 2026-01-17 00:56:41.158318 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.158327 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.158336 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.158343 | orchestrator | 2026-01-17 00:56:41.158352 | orchestrator | TASK [include_role : placement] ************************************************ 2026-01-17 00:56:41.158361 | orchestrator | Saturday 17 January 2026 00:53:58 +0000 (0:00:02.398) 0:04:01.572 ****** 2026-01-17 00:56:41.158369 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:56:41.158378 | orchestrator | 2026-01-17 00:56:41.158387 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-01-17 00:56:41.158395 | orchestrator | Saturday 17 January 2026 00:53:59 +0000 (0:00:01.239) 0:04:02.812 ****** 2026-01-17 00:56:41.158405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-17 00:56:41.158457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-17 00:56:41.158472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-17 00:56:41.158482 | orchestrator | 2026-01-17 00:56:41.158492 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-01-17 00:56:41.158501 | orchestrator | Saturday 17 January 2026 00:54:03 +0000 (0:00:03.747) 0:04:06.560 ****** 2026-01-17 00:56:41.158510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-17 00:56:41.158520 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.158528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-17 00:56:41.158542 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.158575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-17 00:56:41.158585 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.158593 | orchestrator | 2026-01-17 00:56:41.158601 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-01-17 00:56:41.158610 | orchestrator | Saturday 17 January 2026 00:54:03 +0000 (0:00:00.530) 0:04:07.091 ****** 2026-01-17 00:56:41.158622 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-17 00:56:41.158632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-17 00:56:41.158642 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.158650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-17 00:56:41.158660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-17 00:56:41.158669 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.158679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-17 00:56:41.158689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-17 00:56:41.158698 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.158707 | orchestrator | 2026-01-17 00:56:41.158716 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-01-17 00:56:41.158725 | orchestrator | Saturday 17 January 2026 00:54:04 +0000 (0:00:00.774) 0:04:07.866 ****** 2026-01-17 
00:56:41.158735 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.158744 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.158752 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.158762 | orchestrator | 2026-01-17 00:56:41.158770 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-01-17 00:56:41.158780 | orchestrator | Saturday 17 January 2026 00:54:06 +0000 (0:00:01.494) 0:04:09.360 ****** 2026-01-17 00:56:41.158788 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.158797 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.158806 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.158824 | orchestrator | 2026-01-17 00:56:41.158833 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-01-17 00:56:41.158842 | orchestrator | Saturday 17 January 2026 00:54:08 +0000 (0:00:02.280) 0:04:11.641 ****** 2026-01-17 00:56:41.158851 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:56:41.158859 | orchestrator | 2026-01-17 00:56:41.158866 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-01-17 00:56:41.158874 | orchestrator | Saturday 17 January 2026 00:54:09 +0000 (0:00:01.611) 0:04:13.252 ****** 2026-01-17 00:56:41.158884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 
'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-17 00:56:41.158931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.158943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.158953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-17 00:56:41.158969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.158978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 
'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.159022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-17 00:56:41.159033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.159042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.159051 | orchestrator | 2026-01-17 00:56:41.159087 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-01-17 00:56:41.159104 | orchestrator | Saturday 17 January 2026 00:54:14 +0000 (0:00:04.423) 0:04:17.676 ****** 2026-01-17 00:56:41.159114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-17 00:56:41.159123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.159161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.159172 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.159185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-17 00:56:41.159195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.159211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.159221 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.159230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-17 00:56:41.159272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.159283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.159292 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.159302 | orchestrator | 2026-01-17 00:56:41.159310 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-01-17 00:56:41.159318 | orchestrator | Saturday 17 January 2026 00:54:15 +0000 (0:00:00.996) 0:04:18.672 ****** 2026-01-17 00:56:41.159334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-17 00:56:41.159345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-17 00:56:41.159354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-17 00:56:41.159364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-17 00:56:41.159373 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.159382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-17 00:56:41.159391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-17 00:56:41.159400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-17 00:56:41.159409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-17 00:56:41.159418 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.159438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-17 00:56:41.159447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-17 00:56:41.159456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-17 00:56:41.159465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-17 00:56:41.159474 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.159482 | orchestrator | 2026-01-17 00:56:41.159524 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-01-17 00:56:41.159535 | orchestrator | Saturday 17 January 2026 00:54:16 +0000 (0:00:01.279) 0:04:19.951 ****** 2026-01-17 00:56:41.159543 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.159551 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.159559 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.159567 | orchestrator | 2026-01-17 00:56:41.159575 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-01-17 00:56:41.159589 | orchestrator | Saturday 17 January 2026 00:54:18 +0000 (0:00:01.491) 0:04:21.443 ****** 2026-01-17 00:56:41.159598 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.159608 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.159617 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.159626 | orchestrator | 2026-01-17 00:56:41.159642 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-01-17 00:56:41.159652 | orchestrator | Saturday 17 January 2026 00:54:20 +0000 (0:00:02.282) 0:04:23.726 ****** 2026-01-17 00:56:41.159660 | orchestrator | included: nova-cell for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-01-17 00:56:41.159668 | orchestrator | 2026-01-17 00:56:41.159676 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-01-17 00:56:41.159684 | orchestrator | Saturday 17 January 2026 00:54:22 +0000 (0:00:01.740) 0:04:25.466 ****** 2026-01-17 00:56:41.159693 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-01-17 00:56:41.159703 | orchestrator | 2026-01-17 00:56:41.159710 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-01-17 00:56:41.159718 | orchestrator | Saturday 17 January 2026 00:54:23 +0000 (0:00:00.859) 0:04:26.325 ****** 2026-01-17 00:56:41.159728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-17 00:56:41.159738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-17 00:56:41.159747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 
'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-17 00:56:41.159756 | orchestrator | 2026-01-17 00:56:41.159765 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-01-17 00:56:41.159774 | orchestrator | Saturday 17 January 2026 00:54:27 +0000 (0:00:04.676) 0:04:31.002 ****** 2026-01-17 00:56:41.159783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-17 00:56:41.159791 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.159800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-17 00:56:41.159810 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.159867 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-17 00:56:41.159879 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.159887 | orchestrator | 2026-01-17 00:56:41.159896 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-01-17 00:56:41.159903 | orchestrator | Saturday 17 January 2026 00:54:29 +0000 (0:00:01.494) 0:04:32.496 ****** 2026-01-17 00:56:41.159912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-17 00:56:41.159922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-17 00:56:41.159932 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.159941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-17 00:56:41.159950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': 
'6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-17 00:56:41.159959 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.159967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-17 00:56:41.159976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-17 00:56:41.159985 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.159993 | orchestrator | 2026-01-17 00:56:41.160002 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-17 00:56:41.160011 | orchestrator | Saturday 17 January 2026 00:54:30 +0000 (0:00:01.694) 0:04:34.190 ****** 2026-01-17 00:56:41.160019 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.160029 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.160037 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.160046 | orchestrator | 2026-01-17 00:56:41.160116 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-17 00:56:41.160128 | orchestrator | Saturday 17 January 2026 00:54:33 +0000 (0:00:02.555) 0:04:36.746 ****** 2026-01-17 00:56:41.160137 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.160145 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.160153 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.160161 | orchestrator | 2026-01-17 00:56:41.160169 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-01-17 00:56:41.160177 | orchestrator | Saturday 17 January 2026 00:54:36 
+0000 (0:00:03.117) 0:04:39.864 ****** 2026-01-17 00:56:41.160188 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-01-17 00:56:41.160205 | orchestrator | 2026-01-17 00:56:41.160215 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-01-17 00:56:41.160222 | orchestrator | Saturday 17 January 2026 00:54:37 +0000 (0:00:01.451) 0:04:41.316 ****** 2026-01-17 00:56:41.160228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-17 00:56:41.160235 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.160272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-17 00:56:41.160278 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.160291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': 
{'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-17 00:56:41.160297 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.160302 | orchestrator | 2026-01-17 00:56:41.160308 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-01-17 00:56:41.160313 | orchestrator | Saturday 17 January 2026 00:54:39 +0000 (0:00:01.288) 0:04:42.604 ****** 2026-01-17 00:56:41.160318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-17 00:56:41.160324 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.160329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-17 
00:56:41.160335 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.160340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-17 00:56:41.160350 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.160355 | orchestrator | 2026-01-17 00:56:41.160361 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-01-17 00:56:41.160366 | orchestrator | Saturday 17 January 2026 00:54:40 +0000 (0:00:01.374) 0:04:43.978 ****** 2026-01-17 00:56:41.160371 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.160377 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.160382 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.160387 | orchestrator | 2026-01-17 00:56:41.160393 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-17 00:56:41.160398 | orchestrator | Saturday 17 January 2026 00:54:42 +0000 (0:00:01.946) 0:04:45.924 ****** 2026-01-17 00:56:41.160403 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:56:41.160409 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:56:41.160414 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:56:41.160420 | orchestrator | 2026-01-17 00:56:41.160425 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-17 00:56:41.160430 | orchestrator | Saturday 17 January 2026 00:54:45 +0000 (0:00:02.546) 0:04:48.470 ****** 2026-01-17 00:56:41.160436 | 
orchestrator | ok: [testbed-node-0] 2026-01-17 00:56:41.160441 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:56:41.160446 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:56:41.160451 | orchestrator | 2026-01-17 00:56:41.160456 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-01-17 00:56:41.160462 | orchestrator | Saturday 17 January 2026 00:54:48 +0000 (0:00:03.052) 0:04:51.523 ****** 2026-01-17 00:56:41.160467 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-01-17 00:56:41.160473 | orchestrator | 2026-01-17 00:56:41.160478 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-01-17 00:56:41.160483 | orchestrator | Saturday 17 January 2026 00:54:49 +0000 (0:00:00.869) 0:04:52.393 ****** 2026-01-17 00:56:41.160509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-17 00:56:41.160516 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.160522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-17 00:56:41.160527 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.160532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-17 00:56:41.160537 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.160545 | orchestrator | 2026-01-17 00:56:41.160550 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-01-17 00:56:41.160558 | orchestrator | Saturday 17 January 2026 00:54:50 +0000 (0:00:01.327) 0:04:53.721 ****** 2026-01-17 00:56:41.160566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-17 00:56:41.160574 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.160581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-17 00:56:41.160589 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.160597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-17 00:56:41.160605 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.160612 | orchestrator | 2026-01-17 00:56:41.160617 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-01-17 00:56:41.160622 | orchestrator | Saturday 17 January 2026 00:54:51 +0000 (0:00:01.339) 0:04:55.060 ****** 2026-01-17 00:56:41.160627 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.160632 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.160636 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.160641 | orchestrator | 2026-01-17 00:56:41.160646 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-17 00:56:41.160650 | orchestrator | Saturday 17 January 2026 00:54:53 +0000 (0:00:01.589) 0:04:56.649 ****** 2026-01-17 00:56:41.160655 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:56:41.160678 | orchestrator | ok: 
[testbed-node-1] 2026-01-17 00:56:41.160685 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:56:41.160692 | orchestrator | 2026-01-17 00:56:41.160701 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-17 00:56:41.160709 | orchestrator | Saturday 17 January 2026 00:54:55 +0000 (0:00:02.496) 0:04:59.146 ****** 2026-01-17 00:56:41.160717 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:56:41.160722 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:56:41.160726 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:56:41.160731 | orchestrator | 2026-01-17 00:56:41.160754 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-01-17 00:56:41.160759 | orchestrator | Saturday 17 January 2026 00:54:59 +0000 (0:00:03.523) 0:05:02.669 ****** 2026-01-17 00:56:41.160764 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:56:41.160769 | orchestrator | 2026-01-17 00:56:41.160773 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-01-17 00:56:41.160778 | orchestrator | Saturday 17 January 2026 00:55:01 +0000 (0:00:01.651) 0:05:04.321 ****** 2026-01-17 00:56:41.160790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-17 00:56:41.160800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-17 00:56:41.160809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-17 00:56:41.160816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-17 00:56:41.160822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.160849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-17 00:56:41.160860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-17 00:56:41.160865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-17 00:56:41.160870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-17 00:56:41.160877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.160886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-17 00:56:41.160921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-17 00:56:41.160936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-17 00:56:41.160944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-17 00:56:41.160952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.160959 | orchestrator | 2026-01-17 00:56:41.160966 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-01-17 00:56:41.160973 | orchestrator | Saturday 17 January 2026 00:55:04 +0000 (0:00:03.454) 0:05:07.776 ****** 2026-01-17 00:56:41.160980 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-17 00:56:41.160988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-17 00:56:41.161024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-17 00:56:41.161039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-17 00:56:41.161048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.161075 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.161083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-17 00:56:41.161092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-17 00:56:41.161099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-17 00:56:41.161130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-17 00:56:41.161149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.161156 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.161161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-17 00:56:41.161166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-17 00:56:41.161171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-17 00:56:41.161176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-17 00:56:41.161198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-17 00:56:41.161207 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.161212 | orchestrator | 2026-01-17 00:56:41.161217 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-01-17 00:56:41.161222 | orchestrator | Saturday 17 January 2026 00:55:05 +0000 (0:00:00.730) 0:05:08.506 ****** 2026-01-17 00:56:41.161229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-17 00:56:41.161235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-17 00:56:41.161240 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.161245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-17 00:56:41.161250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-17 00:56:41.161255 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.161259 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-17 00:56:41.161264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-17 00:56:41.161269 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.161274 | orchestrator | 2026-01-17 00:56:41.161279 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-01-17 00:56:41.161283 | orchestrator | Saturday 17 January 2026 00:55:06 +0000 (0:00:01.537) 0:05:10.044 ****** 2026-01-17 00:56:41.161288 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.161293 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.161297 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.161302 | orchestrator | 2026-01-17 00:56:41.161307 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-01-17 00:56:41.161311 | orchestrator | Saturday 17 January 2026 00:55:08 +0000 (0:00:01.504) 0:05:11.548 ****** 2026-01-17 00:56:41.161316 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:56:41.161321 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:56:41.161325 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:56:41.161332 | orchestrator | 2026-01-17 00:56:41.161340 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-01-17 00:56:41.161348 | orchestrator | Saturday 17 January 2026 00:55:10 +0000 (0:00:02.246) 0:05:13.795 ****** 2026-01-17 00:56:41.161356 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:56:41.161363 | orchestrator | 2026-01-17 00:56:41.161371 | orchestrator | TASK 
[haproxy-config : Copying over opensearch haproxy config] ***************** 2026-01-17 00:56:41.161379 | orchestrator | Saturday 17 January 2026 00:55:12 +0000 (0:00:01.570) 0:05:15.365 ****** 2026-01-17 00:56:41.161388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-17 00:56:41.161422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-17 00:56:41.161432 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-17 00:56:41.161439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-17 00:56:41.161445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-17 00:56:41.161472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-17 00:56:41.161478 | orchestrator | 2026-01-17 00:56:41.161483 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-01-17 00:56:41.161488 | orchestrator | Saturday 17 January 2026 00:55:17 +0000 (0:00:05.654) 0:05:21.020 ****** 2026-01-17 00:56:41.161498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-17 00:56:41.161503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-17 00:56:41.161511 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.161519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-17 00:56:41.161532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-17 00:56:41.161561 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.161574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-17 00:56:41.161583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-17 00:56:41.161591 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.161599 | orchestrator | 2026-01-17 00:56:41.161608 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-01-17 00:56:41.161616 | orchestrator | Saturday 17 January 2026 00:55:18 +0000 (0:00:00.718) 0:05:21.738 ****** 2026-01-17 00:56:41.161624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-17 00:56:41.161639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-17 00:56:41.161649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-17 00:56:41.161659 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.161668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-17 00:56:41.161676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-17 00:56:41.161684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-17 00:56:41.161690 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.161695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-17 00:56:41.161700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-17 00:56:41.161723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-17 00:56:41.161729 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.161734 | orchestrator | 2026-01-17 00:56:41.161739 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-01-17 00:56:41.161747 | orchestrator | Saturday 17 January 2026 00:55:19 +0000 (0:00:00.957) 0:05:22.696 ****** 2026-01-17 00:56:41.161752 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.161756 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.161761 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.161766 | orchestrator | 2026-01-17 00:56:41.161770 | orchestrator | TASK [proxysql-config : Copying over opensearch 
ProxySQL rules config] ********* 2026-01-17 00:56:41.161775 | orchestrator | Saturday 17 January 2026 00:55:20 +0000 (0:00:00.843) 0:05:23.539 ****** 2026-01-17 00:56:41.161779 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.161784 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.161789 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.161793 | orchestrator | 2026-01-17 00:56:41.161798 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-01-17 00:56:41.161803 | orchestrator | Saturday 17 January 2026 00:55:21 +0000 (0:00:01.374) 0:05:24.913 ****** 2026-01-17 00:56:41.161807 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:56:41.161812 | orchestrator | 2026-01-17 00:56:41.161817 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-01-17 00:56:41.161822 | orchestrator | Saturday 17 January 2026 00:55:23 +0000 (0:00:01.460) 0:05:26.374 ****** 2026-01-17 00:56:41.161827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-17 00:56:41.161836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-17 00:56:41.161842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 00:56:41.161847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 00:56:41.161852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-17 00:56:41.161877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-17 00:56:41.161883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-17 00:56:41.161893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:56:41.161902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:56:41.161910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-17 00:56:41.161918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-17 00:56:41.161927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-17 00:56:41.161962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:56:41.161973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:56:41.161980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-17 00:56:41.161994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-17 00:56:41.162003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-01-17 00:56:41.162011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:56:41.162091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:56:41.162105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-17 00:56:41.162113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-17 00:56:41.162128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-01-17 00:56:41.162137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:56:41.162144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:56:41.162151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-17 00:56:41.162170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-17 00:56:41.162187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-01-17 00:56:41.162195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:56:41.162202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:56:41.162210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-17 00:56:41.162218 | orchestrator |
2026-01-17 00:56:41.162226 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-01-17 00:56:41.162233 | orchestrator | Saturday 17 January 2026 00:55:27 +0000 (0:00:04.727) 0:05:31.102 ******
2026-01-17 00:56:41.162246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-17 00:56:41.162258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-17 00:56:41.162275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:56:41.162284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:56:41.162291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-17 00:56:41.162299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-17 00:56:41.162308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-01-17 00:56:41.162321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:56:41.162334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:56:41.162343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-17 00:56:41.162350 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.162421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-17 00:56:41.162444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-17 00:56:41.162453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:56:41.162459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:56:41.162470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-17 00:56:41.162483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-17 00:56:41.162490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-01-17 00:56:41.162495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:56:41.162500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:56:41.162505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-17 00:56:41.162510 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.162532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-17 00:56:41.162552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-17 00:56:41.162561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:56:41.162568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:56:41.162576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-17 00:56:41.162584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-17 00:56:41.162598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-01-17 00:56:41.162616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:56:41.162624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 00:56:41.162632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-17 00:56:41.162641 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.162648 | orchestrator |
2026-01-17 00:56:41.162656 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-01-17 00:56:41.162664 | orchestrator | Saturday 17 January 2026 00:55:29 +0000 (0:00:01.342) 0:05:32.444 ******
2026-01-17 00:56:41.162672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-01-17 00:56:41.162681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-01-17 00:56:41.162690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-17 00:56:41.162700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-17 00:56:41.162709 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.162717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-01-17 00:56:41.162725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-01-17 00:56:41.162739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-17 00:56:41.162747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-17 00:56:41.162754 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.162767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-01-17 00:56:41.162776 | orchestrator | skipping: [testbed-node-2]
=> (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-17 00:56:41.162789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-17 00:56:41.162798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-17 00:56:41.162805 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.162813 | orchestrator | 2026-01-17 00:56:41.162820 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-01-17 00:56:41.162829 | orchestrator | Saturday 17 January 2026 00:55:30 +0000 (0:00:01.161) 0:05:33.605 ****** 2026-01-17 00:56:41.162836 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.162844 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.162851 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.162859 | orchestrator | 2026-01-17 00:56:41.162867 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-01-17 00:56:41.162874 | orchestrator | Saturday 17 January 2026 00:55:30 +0000 (0:00:00.542) 0:05:34.147 ****** 2026-01-17 00:56:41.162882 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.162890 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.162897 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.162904 | orchestrator 
| 2026-01-17 00:56:41.162911 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-01-17 00:56:41.162919 | orchestrator | Saturday 17 January 2026 00:55:32 +0000 (0:00:01.486) 0:05:35.634 ****** 2026-01-17 00:56:41.162926 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:56:41.162934 | orchestrator | 2026-01-17 00:56:41.162942 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-01-17 00:56:41.162950 | orchestrator | Saturday 17 January 2026 00:55:34 +0000 (0:00:01.868) 0:05:37.503 ****** 2026-01-17 00:56:41.162958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-17 00:56:41.162975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-17 00:56:41.162994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-17 00:56:41.163004 | orchestrator | 2026-01-17 00:56:41.163012 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-01-17 00:56:41.163019 | orchestrator | Saturday 17 January 2026 00:55:36 +0000 (0:00:02.660) 0:05:40.164 ****** 2026-01-17 00:56:41.163027 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-17 00:56:41.163036 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.163044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 
'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-17 00:56:41.163082 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.163091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-17 00:56:41.163100 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.163108 | orchestrator | 2026-01-17 00:56:41.163116 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-01-17 00:56:41.163129 | orchestrator | Saturday 17 January 2026 00:55:37 +0000 (0:00:00.428) 0:05:40.592 ****** 2026-01-17 00:56:41.163138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-17 00:56:41.163147 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.163160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-17 00:56:41.163168 | 
orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.163177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-01-17 00:56:41.163184 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.163192 | orchestrator |
2026-01-17 00:56:41.163201 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-01-17 00:56:41.163209 | orchestrator | Saturday 17 January 2026 00:55:38 +0000 (0:00:01.023) 0:05:41.615 ******
2026-01-17 00:56:41.163215 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.163222 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.163230 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.163237 | orchestrator |
2026-01-17 00:56:41.163244 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-01-17 00:56:41.163253 | orchestrator | Saturday 17 January 2026 00:55:38 +0000 (0:00:00.442) 0:05:42.057 ******
2026-01-17 00:56:41.163260 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.163268 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.163276 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.163284 | orchestrator |
2026-01-17 00:56:41.163292 | orchestrator | TASK [include_role : skyline] **************************************************
2026-01-17 00:56:41.163300 | orchestrator | Saturday 17 January 2026 00:55:40 +0000 (0:00:01.381) 0:05:43.439 ******
2026-01-17 00:56:41.163308 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:56:41.163323 | orchestrator |
2026-01-17 00:56:41.163330 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-01-17 00:56:41.163337 | orchestrator | Saturday 17 January 2026 00:55:41 +0000 (0:00:01.845) 0:05:45.284 ******
2026-01-17 00:56:41.163346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-17 00:56:41.163355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-17 00:56:41.163370 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-17 00:56:41.163389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-17 00:56:41.163403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 
'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-17 00:56:41.163412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-17 00:56:41.163420 | orchestrator | 2026-01-17 00:56:41.163428 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-01-17 00:56:41.163436 | orchestrator | Saturday 17 January 2026 
00:55:48 +0000 (0:00:06.255) 0:05:51.540 ****** 2026-01-17 00:56:41.163449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-17 00:56:41.163462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-17 00:56:41.163471 | 
orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.163485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-17 00:56:41.163494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-17 00:56:41.163503 | orchestrator 
| skipping: [testbed-node-1] 2026-01-17 00:56:41.163511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-17 00:56:41.163528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-17 00:56:41.163537 | orchestrator | skipping: 
[testbed-node-2] 2026-01-17 00:56:41.163545 | orchestrator | 2026-01-17 00:56:41.163550 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-01-17 00:56:41.163559 | orchestrator | Saturday 17 January 2026 00:55:48 +0000 (0:00:00.630) 0:05:52.170 ****** 2026-01-17 00:56:41.163564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-17 00:56:41.163570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-17 00:56:41.163575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-17 00:56:41.163580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-17 00:56:41.163585 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:56:41.163590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-17 00:56:41.163595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-17 00:56:41.163600 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-17 00:56:41.163605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-17 00:56:41.163609 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:56:41.163614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-17 00:56:41.163619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-17 00:56:41.163624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-17 00:56:41.163629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-17 00:56:41.163634 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:56:41.163639 | orchestrator | 2026-01-17 00:56:41.163643 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-01-17 00:56:41.163648 | orchestrator | Saturday 17 January 2026 00:55:50 +0000 (0:00:01.685) 0:05:53.856 ****** 2026-01-17 00:56:41.163653 | orchestrator | changed: [testbed-node-0] 
2026-01-17 00:56:41.163658 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:56:41.163662 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:56:41.163667 | orchestrator |
2026-01-17 00:56:41.163672 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-01-17 00:56:41.163681 | orchestrator | Saturday 17 January 2026 00:55:52 +0000 (0:00:01.493) 0:05:55.349 ******
2026-01-17 00:56:41.163694 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:56:41.163702 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:56:41.163710 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:56:41.163718 | orchestrator |
2026-01-17 00:56:41.163725 | orchestrator | TASK [include_role : swift] ****************************************************
2026-01-17 00:56:41.163733 | orchestrator | Saturday 17 January 2026 00:55:54 +0000 (0:00:02.240) 0:05:57.590 ******
2026-01-17 00:56:41.163740 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.163752 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.163760 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.163768 | orchestrator |
2026-01-17 00:56:41.163776 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-01-17 00:56:41.163783 | orchestrator | Saturday 17 January 2026 00:55:54 +0000 (0:00:00.333) 0:05:57.924 ******
2026-01-17 00:56:41.163790 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.163798 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.163806 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.163814 | orchestrator |
2026-01-17 00:56:41.163823 | orchestrator | TASK [include_role : trove] ****************************************************
2026-01-17 00:56:41.163831 | orchestrator | Saturday 17 January 2026 00:55:54 +0000 (0:00:00.317) 0:05:58.241 ******
2026-01-17 00:56:41.163839 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.163847 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.163856 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.163861 | orchestrator |
2026-01-17 00:56:41.163866 | orchestrator | TASK [include_role : venus] ****************************************************
2026-01-17 00:56:41.163871 | orchestrator | Saturday 17 January 2026 00:55:55 +0000 (0:00:00.699) 0:05:58.941 ******
2026-01-17 00:56:41.163876 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.163881 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.163886 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.163890 | orchestrator |
2026-01-17 00:56:41.163895 | orchestrator | TASK [include_role : watcher] **************************************************
2026-01-17 00:56:41.163900 | orchestrator | Saturday 17 January 2026 00:55:55 +0000 (0:00:00.349) 0:05:59.290 ******
2026-01-17 00:56:41.163905 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.163909 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.163914 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.163919 | orchestrator |
2026-01-17 00:56:41.163924 | orchestrator | TASK [include_role : zun] ******************************************************
2026-01-17 00:56:41.163929 | orchestrator | Saturday 17 January 2026 00:55:56 +0000 (0:00:00.329) 0:05:59.620 ******
2026-01-17 00:56:41.163933 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.163938 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.163943 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.163948 | orchestrator |
2026-01-17 00:56:41.163953 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-01-17 00:56:41.163959 | orchestrator | Saturday 17 January 2026 00:55:57 +0000 (0:00:00.850) 0:06:00.470 ******
2026-01-17 00:56:41.163967 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:56:41.163975 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:56:41.163983 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:56:41.163990 | orchestrator |
2026-01-17 00:56:41.163997 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-01-17 00:56:41.164005 | orchestrator | Saturday 17 January 2026 00:55:57 +0000 (0:00:00.368) 0:06:01.229 ******
2026-01-17 00:56:41.164012 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:56:41.164020 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:56:41.164027 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:56:41.164034 | orchestrator |
2026-01-17 00:56:41.164042 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-01-17 00:56:41.164050 | orchestrator | Saturday 17 January 2026 00:55:58 +0000 (0:00:00.992) 0:06:01.598 ******
2026-01-17 00:56:41.164210 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:56:41.164228 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:56:41.164233 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:56:41.164238 | orchestrator |
2026-01-17 00:56:41.164243 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-01-17 00:56:41.164248 | orchestrator | Saturday 17 January 2026 00:55:59 +0000 (0:00:01.327) 0:06:02.590 ******
2026-01-17 00:56:41.164253 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:56:41.164258 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:56:41.164263 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:56:41.164268 | orchestrator |
2026-01-17 00:56:41.164273 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-01-17 00:56:41.164277 | orchestrator | Saturday 17 January 2026 00:56:00 +0000 (0:00:00.964) 0:06:03.918 ******
2026-01-17 00:56:41.164282 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:56:41.164287 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:56:41.164292 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:56:41.164297 | orchestrator |
2026-01-17 00:56:41.164302 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-01-17 00:56:41.164306 | orchestrator | Saturday 17 January 2026 00:56:01 +0000 (0:00:00.964) 0:06:04.882 ******
2026-01-17 00:56:41.164311 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:56:41.164316 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:56:41.164321 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:56:41.164326 | orchestrator |
2026-01-17 00:56:41.164331 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-01-17 00:56:41.164335 | orchestrator | Saturday 17 January 2026 00:56:11 +0000 (0:00:09.651) 0:06:14.533 ******
2026-01-17 00:56:41.164340 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:56:41.164345 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:56:41.164350 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:56:41.164354 | orchestrator |
2026-01-17 00:56:41.164359 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-01-17 00:56:41.164364 | orchestrator | Saturday 17 January 2026 00:56:11 +0000 (0:00:00.776) 0:06:15.310 ******
2026-01-17 00:56:41.164369 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:56:41.164374 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:56:41.164378 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:56:41.164383 | orchestrator |
2026-01-17 00:56:41.164388 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-01-17 00:56:41.164393 | orchestrator | Saturday 17 January 2026 00:56:21 +0000 (0:00:09.514) 0:06:24.825 ******
2026-01-17 00:56:41.164398 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:56:41.164410 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:56:41.164415 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:56:41.164420 | orchestrator |
2026-01-17 00:56:41.164425 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-01-17 00:56:41.164430 | orchestrator | Saturday 17 January 2026 00:56:25 +0000 (0:00:04.278) 0:06:29.103 ******
2026-01-17 00:56:41.164434 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:56:41.164439 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:56:41.164444 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:56:41.164449 | orchestrator |
2026-01-17 00:56:41.164458 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-01-17 00:56:41.164463 | orchestrator | Saturday 17 January 2026 00:56:34 +0000 (0:00:08.391) 0:06:37.495 ******
2026-01-17 00:56:41.164468 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.164473 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.164477 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.164482 | orchestrator |
2026-01-17 00:56:41.164487 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-01-17 00:56:41.164492 | orchestrator | Saturday 17 January 2026 00:56:34 +0000 (0:00:00.361) 0:06:37.856 ******
2026-01-17 00:56:41.164496 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.164501 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.164506 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.164515 | orchestrator |
2026-01-17 00:56:41.164520 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-01-17 00:56:41.164525 | orchestrator | Saturday 17 January 2026 00:56:34 +0000 (0:00:00.366) 0:06:38.223 ******
2026-01-17 00:56:41.164529 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.164534 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.164539 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.164544 | orchestrator |
2026-01-17 00:56:41.164548 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-01-17 00:56:41.164553 | orchestrator | Saturday 17 January 2026 00:56:35 +0000 (0:00:00.691) 0:06:38.914 ******
2026-01-17 00:56:41.164558 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.164563 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.164567 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.164572 | orchestrator |
2026-01-17 00:56:41.164577 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-01-17 00:56:41.164582 | orchestrator | Saturday 17 January 2026 00:56:35 +0000 (0:00:00.361) 0:06:39.276 ******
2026-01-17 00:56:41.164586 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.164591 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.164596 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.164601 | orchestrator |
2026-01-17 00:56:41.164605 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-01-17 00:56:41.164610 | orchestrator | Saturday 17 January 2026 00:56:36 +0000 (0:00:00.366) 0:06:39.642 ******
2026-01-17 00:56:41.164615 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:56:41.164620 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:56:41.164624 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:56:41.164629 | orchestrator |
2026-01-17 00:56:41.164634 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-01-17 00:56:41.164639 | orchestrator | Saturday 17 January 2026 00:56:36 +0000 (0:00:00.358) 0:06:40.001 ******
2026-01-17 00:56:41.164643 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:56:41.164648 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:56:41.164653 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:56:41.164658 | orchestrator |
2026-01-17 00:56:41.164662 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-01-17 00:56:41.164667 | orchestrator | Saturday 17 January 2026 00:56:38 +0000 (0:00:01.370) 0:06:41.372 ******
2026-01-17 00:56:41.164672 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:56:41.164676 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:56:41.164681 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:56:41.164685 | orchestrator |
2026-01-17 00:56:41.164690 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 00:56:41.164695 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-01-17 00:56:41.164701 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-01-17 00:56:41.164705 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-01-17 00:56:41.164710 | orchestrator |
2026-01-17 00:56:41.164715 | orchestrator |
2026-01-17 00:56:41.164719 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 00:56:41.164724 | orchestrator | Saturday 17 January 2026 00:56:38 +0000 (0:00:00.926) 0:06:42.299 ******
2026-01-17 00:56:41.164728 | orchestrator | ===============================================================================
2026-01-17 00:56:41.164733 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.65s
2026-01-17 00:56:41.164737 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 9.51s
2026-01-17 00:56:41.164742 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.39s
2026-01-17 00:56:41.164751 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 6.60s
2026-01-17 00:56:41.164755 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.26s
2026-01-17 00:56:41.164760 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.65s
2026-01-17 00:56:41.164764 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 5.55s
2026-01-17 00:56:41.164769 | orchestrator | loadbalancer : Copying over haproxy.cfg --------------------------------- 4.85s
2026-01-17 00:56:41.164773 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.81s
2026-01-17 00:56:41.164781 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.73s
2026-01-17 00:56:41.164785 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.68s
2026-01-17 00:56:41.164790 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.51s
2026-01-17 00:56:41.164794 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.49s
2026-01-17 00:56:41.164801 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.42s
2026-01-17 00:56:41.164806 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.35s
2026-01-17 00:56:41.164811 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.33s
2026-01-17 00:56:41.164815 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.28s
2026-01-17 00:56:41.164819 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.22s
2026-01-17 00:56:41.164824 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.20s
2026-01-17 00:56:41.164828 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.93s
2026-01-17 00:56:41.164833 | orchestrator | 2026-01-17 00:56:41 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:56:41.164838 | orchestrator | 2026-01-17 00:56:41 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:56:44.190478 | orchestrator | 2026-01-17 00:56:44 | INFO  | Task a2bd0e71-76fc-40aa-b003-09f7646ecc3c is in state STARTED
2026-01-17 00:56:44.193871 | orchestrator | 2026-01-17 00:56:44 | INFO  | Task 6adcda3a-ea21-49a7-8993-d3a3658b387d is in state STARTED
2026-01-17 00:56:44.195271 | orchestrator | 2026-01-17 00:56:44 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:56:44.195309 | orchestrator | 2026-01-17 00:56:44 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:58:46.304518 | orchestrator | 2026-01-17 00:58:46 | INFO  | Task a2bd0e71-76fc-40aa-b003-09f7646ecc3c is in state STARTED
2026-01-17 00:58:46.307720 | orchestrator | 2026-01-17 00:58:46 | INFO  | Task 6adcda3a-ea21-49a7-8993-d3a3658b387d is in state STARTED
2026-01-17 00:58:46.310782 | orchestrator | 2026-01-17 00:58:46 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED
2026-01-17 00:58:46.310853 | orchestrator | 2026-01-17 00:58:46 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:58:49.354280 | orchestrator | 2026-01-17 00:58:49 | INFO  | Task a2bd0e71-76fc-40aa-b003-09f7646ecc3c is in state STARTED
2026-01-17 00:58:49.355261 | orchestrator | 2026-01-17 00:58:49 | INFO  | Task 6adcda3a-ea21-49a7-8993-d3a3658b387d is in state STARTED
2026-01-17 00:58:49.356577 | orchestrator | 2026-01-17 00:58:49 |
INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:58:49.356624 | orchestrator | 2026-01-17 00:58:49 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:58:52.410876 | orchestrator | 2026-01-17 00:58:52 | INFO  | Task a2bd0e71-76fc-40aa-b003-09f7646ecc3c is in state STARTED 2026-01-17 00:58:52.413396 | orchestrator | 2026-01-17 00:58:52 | INFO  | Task 6adcda3a-ea21-49a7-8993-d3a3658b387d is in state STARTED 2026-01-17 00:58:52.415890 | orchestrator | 2026-01-17 00:58:52 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:58:52.415962 | orchestrator | 2026-01-17 00:58:52 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:58:55.462652 | orchestrator | 2026-01-17 00:58:55 | INFO  | Task a2bd0e71-76fc-40aa-b003-09f7646ecc3c is in state STARTED 2026-01-17 00:58:55.466009 | orchestrator | 2026-01-17 00:58:55 | INFO  | Task 6adcda3a-ea21-49a7-8993-d3a3658b387d is in state STARTED 2026-01-17 00:58:55.468330 | orchestrator | 2026-01-17 00:58:55 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:58:55.468401 | orchestrator | 2026-01-17 00:58:55 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:58:58.525204 | orchestrator | 2026-01-17 00:58:58 | INFO  | Task a2bd0e71-76fc-40aa-b003-09f7646ecc3c is in state STARTED 2026-01-17 00:58:58.526826 | orchestrator | 2026-01-17 00:58:58 | INFO  | Task 6adcda3a-ea21-49a7-8993-d3a3658b387d is in state STARTED 2026-01-17 00:58:58.530306 | orchestrator | 2026-01-17 00:58:58 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state STARTED 2026-01-17 00:58:58.530477 | orchestrator | 2026-01-17 00:58:58 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:59:01.584228 | orchestrator | 2026-01-17 00:59:01 | INFO  | Task bb757c37-4951-45ee-97d6-7030351d7249 is in state STARTED 2026-01-17 00:59:01.587201 | orchestrator | 2026-01-17 00:59:01 | INFO  | Task a2bd0e71-76fc-40aa-b003-09f7646ecc3c is in 
state STARTED 2026-01-17 00:59:01.589024 | orchestrator | 2026-01-17 00:59:01 | INFO  | Task 6adcda3a-ea21-49a7-8993-d3a3658b387d is in state STARTED 2026-01-17 00:59:01.599197 | orchestrator | 2026-01-17 00:59:01 | INFO  | Task 1a06fc1e-dccf-48c7-9307-96e1c4567bb9 is in state SUCCESS 2026-01-17 00:59:01.601307 | orchestrator | 2026-01-17 00:59:01.601352 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-17 00:59:01.601361 | orchestrator | 2.16.14 2026-01-17 00:59:01.601378 | orchestrator | 2026-01-17 00:59:01.601383 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-01-17 00:59:01.601387 | orchestrator | 2026-01-17 00:59:01.601391 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-17 00:59:01.601395 | orchestrator | Saturday 17 January 2026 00:47:28 +0000 (0:00:00.890) 0:00:00.890 ****** 2026-01-17 00:59:01.601400 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:59:01.601404 | orchestrator | 2026-01-17 00:59:01.601408 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-17 00:59:01.601412 | orchestrator | Saturday 17 January 2026 00:47:29 +0000 (0:00:01.088) 0:00:01.978 ****** 2026-01-17 00:59:01.601416 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.601420 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.601450 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.601455 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.601458 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.601462 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.601466 | orchestrator | 2026-01-17 00:59:01.601470 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 
2026-01-17 00:59:01.601474 | orchestrator | Saturday 17 January 2026 00:47:31 +0000 (0:00:01.812) 0:00:03.791 ****** 2026-01-17 00:59:01.601477 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.601481 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.601485 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.601489 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.601492 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.601503 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.601510 | orchestrator | 2026-01-17 00:59:01.601514 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-17 00:59:01.601518 | orchestrator | Saturday 17 January 2026 00:47:31 +0000 (0:00:00.669) 0:00:04.460 ****** 2026-01-17 00:59:01.601522 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.601526 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.601529 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.601533 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.601537 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.601541 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.601545 | orchestrator | 2026-01-17 00:59:01.601549 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-17 00:59:01.601552 | orchestrator | Saturday 17 January 2026 00:47:32 +0000 (0:00:00.924) 0:00:05.385 ****** 2026-01-17 00:59:01.601556 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.601560 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.601563 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.601567 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.601576 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.601583 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.601587 | orchestrator | 2026-01-17 00:59:01.601591 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] 
****************************************** 2026-01-17 00:59:01.601595 | orchestrator | Saturday 17 January 2026 00:47:33 +0000 (0:00:00.748) 0:00:06.134 ****** 2026-01-17 00:59:01.601599 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.602561 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.602580 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.602584 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.602588 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.602592 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.602595 | orchestrator | 2026-01-17 00:59:01.602600 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-17 00:59:01.602604 | orchestrator | Saturday 17 January 2026 00:47:34 +0000 (0:00:00.712) 0:00:06.846 ****** 2026-01-17 00:59:01.602608 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.602626 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.602631 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.602642 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.602646 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.602649 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.602653 | orchestrator | 2026-01-17 00:59:01.602657 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-17 00:59:01.602661 | orchestrator | Saturday 17 January 2026 00:47:35 +0000 (0:00:00.985) 0:00:07.832 ****** 2026-01-17 00:59:01.602665 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.602672 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.602676 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.602680 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.602688 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.602692 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.602696 | orchestrator | 2026-01-17 
00:59:01.602700 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-17 00:59:01.602703 | orchestrator | Saturday 17 January 2026 00:47:36 +0000 (0:00:00.818) 0:00:08.650 ****** 2026-01-17 00:59:01.602707 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.602711 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.602715 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.602718 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.602722 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.602726 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.602730 | orchestrator | 2026-01-17 00:59:01.602750 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-17 00:59:01.602754 | orchestrator | Saturday 17 January 2026 00:47:37 +0000 (0:00:00.940) 0:00:09.591 ****** 2026-01-17 00:59:01.602758 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-17 00:59:01.602762 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-17 00:59:01.602766 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-17 00:59:01.602770 | orchestrator | 2026-01-17 00:59:01.602774 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-17 00:59:01.602777 | orchestrator | Saturday 17 January 2026 00:47:37 +0000 (0:00:00.703) 0:00:10.295 ****** 2026-01-17 00:59:01.602781 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.602785 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.602789 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.602802 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.602806 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.602810 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.602814 | orchestrator | 
2026-01-17 00:59:01.602818 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-17 00:59:01.602821 | orchestrator | Saturday 17 January 2026 00:47:39 +0000 (0:00:01.440) 0:00:11.736 ****** 2026-01-17 00:59:01.602825 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-17 00:59:01.602829 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-17 00:59:01.602833 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-17 00:59:01.602836 | orchestrator | 2026-01-17 00:59:01.602840 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-17 00:59:01.602844 | orchestrator | Saturday 17 January 2026 00:47:41 +0000 (0:00:02.690) 0:00:14.426 ****** 2026-01-17 00:59:01.602848 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-17 00:59:01.602851 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-17 00:59:01.602855 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-17 00:59:01.602859 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.602863 | orchestrator | 2026-01-17 00:59:01.602866 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-17 00:59:01.602870 | orchestrator | Saturday 17 January 2026 00:47:42 +0000 (0:00:00.560) 0:00:14.987 ****** 2026-01-17 00:59:01.602878 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.602884 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.602887 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.602891 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.602895 | orchestrator | 2026-01-17 00:59:01.602899 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-17 00:59:01.602903 | orchestrator | Saturday 17 January 2026 00:47:43 +0000 (0:00:01.148) 0:00:16.135 ****** 2026-01-17 00:59:01.602921 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.602929 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.602938 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.602943 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.602947 | orchestrator | 2026-01-17 00:59:01.602950 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-17 00:59:01.602954 | orchestrator | Saturday 17 January 2026 00:47:44 +0000 (0:00:00.666) 0:00:16.802 ****** 2026-01-17 00:59:01.602963 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-17 00:47:39.926115', 'end': '2026-01-17 00:47:40.139013', 'delta': '0:00:00.212898', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.602968 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-17 00:47:40.764427', 'end': '2026-01-17 00:47:40.977740', 'delta': '0:00:00.213313', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 
'ansible_loop_var': 'item'})  2026-01-17 00:59:01.602975 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-17 00:47:41.414787', 'end': '2026-01-17 00:47:41.606673', 'delta': '0:00:00.191886', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.602979 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.602983 | orchestrator | 2026-01-17 00:59:01.602987 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-17 00:59:01.602991 | orchestrator | Saturday 17 January 2026 00:47:44 +0000 (0:00:00.203) 0:00:17.005 ****** 2026-01-17 00:59:01.602995 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.602998 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.603002 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.603006 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.603031 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.603035 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.603039 | orchestrator | 2026-01-17 00:59:01.603043 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-17 00:59:01.603047 | orchestrator | Saturday 17 January 2026 00:47:46 +0000 (0:00:02.414) 0:00:19.420 ****** 2026-01-17 00:59:01.603051 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-17 00:59:01.603054 | orchestrator | 2026-01-17 
00:59:01.603058 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-17 00:59:01.603079 | orchestrator | Saturday 17 January 2026 00:47:47 +0000 (0:00:00.759) 0:00:20.179 ****** 2026-01-17 00:59:01.603084 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.603088 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.603091 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.603095 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.603099 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.603103 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.603120 | orchestrator | 2026-01-17 00:59:01.603125 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-17 00:59:01.603128 | orchestrator | Saturday 17 January 2026 00:47:49 +0000 (0:00:01.899) 0:00:22.079 ****** 2026-01-17 00:59:01.603132 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.603136 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.603142 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.603146 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.603149 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.603153 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.603157 | orchestrator | 2026-01-17 00:59:01.603161 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-17 00:59:01.603164 | orchestrator | Saturday 17 January 2026 00:47:52 +0000 (0:00:02.784) 0:00:24.863 ****** 2026-01-17 00:59:01.603168 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.603172 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.603176 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.603179 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.603183 | orchestrator | skipping: 
[testbed-node-1] 2026-01-17 00:59:01.603196 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.603200 | orchestrator | 2026-01-17 00:59:01.603204 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-17 00:59:01.603222 | orchestrator | Saturday 17 January 2026 00:47:53 +0000 (0:00:00.862) 0:00:25.725 ****** 2026-01-17 00:59:01.603227 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.603232 | orchestrator | 2026-01-17 00:59:01.603237 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-17 00:59:01.603241 | orchestrator | Saturday 17 January 2026 00:47:53 +0000 (0:00:00.156) 0:00:25.882 ****** 2026-01-17 00:59:01.603245 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.603250 | orchestrator | 2026-01-17 00:59:01.603254 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-17 00:59:01.603259 | orchestrator | Saturday 17 January 2026 00:47:53 +0000 (0:00:00.258) 0:00:26.140 ****** 2026-01-17 00:59:01.603263 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.603268 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.603272 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.603280 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.603284 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.603289 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.603293 | orchestrator | 2026-01-17 00:59:01.603298 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-17 00:59:01.603302 | orchestrator | Saturday 17 January 2026 00:47:54 +0000 (0:00:00.670) 0:00:26.810 ****** 2026-01-17 00:59:01.603306 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.603311 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.603315 | orchestrator | skipping: 
[testbed-node-5] 2026-01-17 00:59:01.603319 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.603324 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.603328 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.603332 | orchestrator | 2026-01-17 00:59:01.603337 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-17 00:59:01.603341 | orchestrator | Saturday 17 January 2026 00:47:55 +0000 (0:00:01.062) 0:00:27.873 ****** 2026-01-17 00:59:01.603345 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.603350 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.603354 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.603359 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.603363 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.603367 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.603372 | orchestrator | 2026-01-17 00:59:01.603376 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-17 00:59:01.603380 | orchestrator | Saturday 17 January 2026 00:47:56 +0000 (0:00:01.321) 0:00:29.194 ****** 2026-01-17 00:59:01.603385 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.603389 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.603402 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.603407 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.603411 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.603415 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.603419 | orchestrator | 2026-01-17 00:59:01.603424 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-17 00:59:01.603428 | orchestrator | Saturday 17 January 2026 00:47:57 +0000 (0:00:00.994) 0:00:30.188 ****** 2026-01-17 00:59:01.603433 | orchestrator | skipping: 
[testbed-node-3] 2026-01-17 00:59:01.603437 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.603441 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.603445 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.603450 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.603454 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.603474 | orchestrator | 2026-01-17 00:59:01.603479 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-17 00:59:01.603483 | orchestrator | Saturday 17 January 2026 00:47:58 +0000 (0:00:00.575) 0:00:30.764 ****** 2026-01-17 00:59:01.603487 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.603504 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.603509 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.603522 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.603588 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.603594 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.603598 | orchestrator | 2026-01-17 00:59:01.603603 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-17 00:59:01.603607 | orchestrator | Saturday 17 January 2026 00:47:59 +0000 (0:00:00.800) 0:00:31.565 ****** 2026-01-17 00:59:01.603612 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.603616 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.603620 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.603624 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.603628 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.603633 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.603637 | orchestrator | 2026-01-17 00:59:01.603641 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-17 00:59:01.603646 | 
orchestrator | Saturday 17 January 2026 00:48:00 +0000 (0:00:00.989) 0:00:32.554 ****** 2026-01-17 00:59:01.603653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c5f49b22--d40f--5ab7--98f7--9762e23da2c0-osd--block--c5f49b22--d40f--5ab7--98f7--9762e23da2c0', 'dm-uuid-LVM-QaFsaK8PUscqv52QG7rZWQsM1ITbmCNtBg9UmwnkCU0TgTFpgJE46eQvvR1UIOjf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2051e43b--6678--567a--85ad--b7e1187d21ae-osd--block--2051e43b--6678--567a--85ad--b7e1187d21ae', 'dm-uuid-LVM-Md7et7hVBu5ntN3bevHsnjBkleVswA1X1WLsiCL62gGz9fZiSf7sD18qnz4rPMBd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603679 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603702 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603737 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b', 'scsi-SQEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b-part1', 'scsi-SQEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b-part14', 'scsi-SQEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b-part15', 'scsi-SQEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b-part16', 'scsi-SQEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 00:59:01.603756 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6f2a493f--ee42--5e89--bc68--fb4f7dc1b165-osd--block--6f2a493f--ee42--5e89--bc68--fb4f7dc1b165', 'dm-uuid-LVM-nFFSrCL2nvETfTYSLcEWw2ku767Ad4TlanSeVjPGOYbtNyp2dOrEmthQag04Qlfw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c5f49b22--d40f--5ab7--98f7--9762e23da2c0-osd--block--c5f49b22--d40f--5ab7--98f7--9762e23da2c0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RMJI3b-05hW-7xpG-f9bN-7LlA-F5wA-8B2W4U', 'scsi-0QEMU_QEMU_HARDDISK_03c99a05-96d9-4471-aa9e-2837c3fbd541', 'scsi-SQEMU_QEMU_HARDDISK_03c99a05-96d9-4471-aa9e-2837c3fbd541'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 00:59:01.603767 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fbc9b557--fafa--5136--b4c6--7d286dd557bb-osd--block--fbc9b557--fafa--5136--b4c6--7d286dd557bb', 'dm-uuid-LVM-tvbYC5qdW0xeFSGscnFgtuguYTc6vFsjyuP8eHblF9gDORksVycTX3WWlG9BStgP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': 
'512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603772 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2051e43b--6678--567a--85ad--b7e1187d21ae-osd--block--2051e43b--6678--567a--85ad--b7e1187d21ae'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ci08xi-DLx5-qkHP-ts8o-o30q-GsMF-9Vu8DA', 'scsi-0QEMU_QEMU_HARDDISK_386eb8af-61b6-405b-8873-9456a29b0ccf', 'scsi-SQEMU_QEMU_HARDDISK_386eb8af-61b6-405b-8873-9456a29b0ccf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 00:59:01.603780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66cad329-aa8c-4366-8769-2bca3a7bcb41', 'scsi-SQEMU_QEMU_HARDDISK_66cad329-aa8c-4366-8769-2bca3a7bcb41'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 00:59:01.603792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-17-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 00:59:01.603802 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-01-17 00:59:01.603806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603812 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603817 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603822 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603836 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b', 'scsi-SQEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part1', 'scsi-SQEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part14', 'scsi-SQEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part15', 'scsi-SQEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part16', 'scsi-SQEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 00:59:01.603845 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6f2a493f--ee42--5e89--bc68--fb4f7dc1b165-osd--block--6f2a493f--ee42--5e89--bc68--fb4f7dc1b165'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3KxdAO-CxAd-wUwe-i40h-hs1c-cSGa-f2Ve6g', 'scsi-0QEMU_QEMU_HARDDISK_89953a4d-629d-4187-87cb-8eaa4172afa2', 'scsi-SQEMU_QEMU_HARDDISK_89953a4d-629d-4187-87cb-8eaa4172afa2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 00:59:01.603850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fbc9b557--fafa--5136--b4c6--7d286dd557bb-osd--block--fbc9b557--fafa--5136--b4c6--7d286dd557bb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-M9wEuU-Ap7d-T4FW-KZtp-Suyy-BaOI-zarCMP', 'scsi-0QEMU_QEMU_HARDDISK_bd9e2794-f462-41d3-bb22-ac4c4b73281f', 'scsi-SQEMU_QEMU_HARDDISK_bd9e2794-f462-41d3-bb22-ac4c4b73281f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 00:59:01.603858 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1215eb05-d4be-4bfd-8c82-e464703dc233', 'scsi-SQEMU_QEMU_HARDDISK_1215eb05-d4be-4bfd-8c82-e464703dc233'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 00:59:01.603878 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.603883 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-17-00-03-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 00:59:01.603888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--a3dfbdd8--de3c--56f7--9997--9a9b5f483001-osd--block--a3dfbdd8--de3c--56f7--9997--9a9b5f483001', 'dm-uuid-LVM-Hs7oUEeU8ADSWmx04CKn6SuMMp8eUWZStt7UHRd6e2EapFzMVikTSwSmjihiJjrs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603893 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--68934a0c--2b18--58d2--8851--459d4d664360-osd--block--68934a0c--2b18--58d2--8851--459d4d664360', 'dm-uuid-LVM-JzKU7Yaxauxxeo3x93Z5swIT25bbKFsjQssE989LuaK4h22b4I0YBNAYmVraKBR4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603898 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603905 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603926 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603958 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603968 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603975 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603983 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.603995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82', 'scsi-SQEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part1', 'scsi-SQEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part14', 'scsi-SQEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part15', 'scsi-SQEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part16', 'scsi-SQEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 00:59:01.604016 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a3dfbdd8--de3c--56f7--9997--9a9b5f483001-osd--block--a3dfbdd8--de3c--56f7--9997--9a9b5f483001'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bjzIvh-K4bB-3USD-1n7N-IeCp-1up8-m3jgq6', 'scsi-0QEMU_QEMU_HARDDISK_653651ff-f0c3-4f93-a415-b7bde2938506', 'scsi-SQEMU_QEMU_HARDDISK_653651ff-f0c3-4f93-a415-b7bde2938506'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 00:59:01.604030 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--68934a0c--2b18--58d2--8851--459d4d664360-osd--block--68934a0c--2b18--58d2--8851--459d4d664360'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sj7qFM-Ltli-Ke0E-lNxX-aEZ4-pO1J-ftJ1GB', 'scsi-0QEMU_QEMU_HARDDISK_3748448b-4cb4-41ff-a93c-c2a900d49ce0', 'scsi-SQEMU_QEMU_HARDDISK_3748448b-4cb4-41ff-a93c-c2a900d49ce0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 00:59:01.604037 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2725b1a-ab02-479a-b1d7-829717bc50e1', 'scsi-SQEMU_QEMU_HARDDISK_b2725b1a-ab02-479a-b1d7-829717bc50e1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 00:59:01.604043 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-17-00-03-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 00:59:01.604050 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.604058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.604069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.604076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.604084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.604119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.604127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.604151 | orchestrator | skipping: [testbed-node-5] 2026-01-17 
00:59:01.604159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.604180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 00:59:01.604192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c6ad37b-235a-42f0-84c6-49b8561a2d55', 'scsi-SQEMU_QEMU_HARDDISK_4c6ad37b-235a-42f0-84c6-49b8561a2d55'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c6ad37b-235a-42f0-84c6-49b8561a2d55-part1', 'scsi-SQEMU_QEMU_HARDDISK_4c6ad37b-235a-42f0-84c6-49b8561a2d55-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c6ad37b-235a-42f0-84c6-49b8561a2d55-part14', 'scsi-SQEMU_QEMU_HARDDISK_4c6ad37b-235a-42f0-84c6-49b8561a2d55-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c6ad37b-235a-42f0-84c6-49b8561a2d55-part15', 'scsi-SQEMU_QEMU_HARDDISK_4c6ad37b-235a-42f0-84c6-49b8561a2d55-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c6ad37b-235a-42f0-84c6-49b8561a2d55-part16', 'scsi-SQEMU_QEMU_HARDDISK_4c6ad37b-235a-42f0-84c6-49b8561a2d55-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 00:59:01.604241 | 
2026-01-17 00:59:01.604249 | orchestrator | skipping: [testbed-node-0] => (item=sr0, QEMU DVD-ROM, label config-2)
2026-01-17 00:59:01.604257 | orchestrator | skipping: [testbed-node-1] => (items loop0..loop7; sda, QEMU HARDDISK 80.00 GB; sr0, QEMU DVD-ROM)
2026-01-17 00:59:01.604309 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.604349 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.604356 | orchestrator | skipping: [testbed-node-2] => (items loop0..loop7; sda, QEMU HARDDISK 80.00 GB; sr0, QEMU DVD-ROM)
2026-01-17 00:59:01.604533 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.604540 | orchestrator |
2026-01-17 00:59:01.604547 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-01-17 00:59:01.604555 | orchestrator | Saturday 17 January 2026 00:48:01 +0000 (0:00:01.225)       0:00:33.779 ******
2026-01-17 00:59:01.604590 | orchestrator | skipping: [testbed-node-3] => (items dm-0, dm-1, loop0..loop7, sda 80.00 GB, sdb and sdc 20.00 GB ceph OSD LVM members, sdd 20.00 GB, sr0; skip_reason: Conditional result was False; false_condition: osd_auto_discovery | default(False) | bool)
2026-01-17 00:59:01.605652 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.605486 | orchestrator | skipping: [testbed-node-4] => (items dm-0, dm-1, loop0..loop3; same false_condition)
2026-01-17 00:59:01.605572 | orchestrator | skipping: [testbed-node-5] => (items dm-0, dm-1, loop0..loop4; same false_condition)
2026-01-17 00:59:01.605743 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5',
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.605750 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.605762 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.605772 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.605817 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82', 'scsi-SQEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part1', 'scsi-SQEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part14', 'scsi-SQEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part15', 'scsi-SQEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part16', 'scsi-SQEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.605824 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.605832 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.605840 | orchestrator | skipping: [testbed-node-5] 
=> (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a3dfbdd8--de3c--56f7--9997--9a9b5f483001-osd--block--a3dfbdd8--de3c--56f7--9997--9a9b5f483001'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bjzIvh-K4bB-3USD-1n7N-IeCp-1up8-m3jgq6', 'scsi-0QEMU_QEMU_HARDDISK_653651ff-f0c3-4f93-a415-b7bde2938506', 'scsi-SQEMU_QEMU_HARDDISK_653651ff-f0c3-4f93-a415-b7bde2938506'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.605903 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.605945 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--68934a0c--2b18--58d2--8851--459d4d664360-osd--block--68934a0c--2b18--58d2--8851--459d4d664360'], 'host': 'SCSI storage controller: Red Hat, 
Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sj7qFM-Ltli-Ke0E-lNxX-aEZ4-pO1J-ftJ1GB', 'scsi-0QEMU_QEMU_HARDDISK_3748448b-4cb4-41ff-a93c-c2a900d49ce0', 'scsi-SQEMU_QEMU_HARDDISK_3748448b-4cb4-41ff-a93c-c2a900d49ce0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.605952 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.605959 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.605977 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2725b1a-ab02-479a-b1d7-829717bc50e1', 'scsi-SQEMU_QEMU_HARDDISK_b2725b1a-ab02-479a-b1d7-829717bc50e1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.605984 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.606059 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-01-17 00:59:01.606072 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.606079 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.606085 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.606098 
| orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.606118 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-17-00-03-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.606163 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.606171 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.606178 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.606185 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-01-17 00:59:01.606197 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.606204 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.606214 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.606261 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b', 'scsi-SQEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part1', 'scsi-SQEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part14', 'scsi-SQEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part15', 'scsi-SQEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part16', 'scsi-SQEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-17 00:59:01.606276 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.606320 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c932fa08-39f8-42bb-b31a-2bfdbc19349f', 'scsi-SQEMU_QEMU_HARDDISK_c932fa08-39f8-42bb-b31a-2bfdbc19349f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c932fa08-39f8-42bb-b31a-2bfdbc19349f-part1', 'scsi-SQEMU_QEMU_HARDDISK_c932fa08-39f8-42bb-b31a-2bfdbc19349f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c932fa08-39f8-42bb-b31a-2bfdbc19349f-part14', 'scsi-SQEMU_QEMU_HARDDISK_c932fa08-39f8-42bb-b31a-2bfdbc19349f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_c932fa08-39f8-42bb-b31a-2bfdbc19349f-part15', 'scsi-SQEMU_QEMU_HARDDISK_c932fa08-39f8-42bb-b31a-2bfdbc19349f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c932fa08-39f8-42bb-b31a-2bfdbc19349f-part16', 'scsi-SQEMU_QEMU_HARDDISK_c932fa08-39f8-42bb-b31a-2bfdbc19349f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.606328 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-17-00-03-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.606335 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.606342 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.606353 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6f2a493f--ee42--5e89--bc68--fb4f7dc1b165-osd--block--6f2a493f--ee42--5e89--bc68--fb4f7dc1b165'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3KxdAO-CxAd-wUwe-i40h-hs1c-cSGa-f2Ve6g', 'scsi-0QEMU_QEMU_HARDDISK_89953a4d-629d-4187-87cb-8eaa4172afa2', 'scsi-SQEMU_QEMU_HARDDISK_89953a4d-629d-4187-87cb-8eaa4172afa2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.606362 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.606369 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.606429 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 00:59:01.606438 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c6ad37b-235a-42f0-84c6-49b8561a2d55', 'scsi-SQEMU_QEMU_HARDDISK_4c6ad37b-235a-42f0-84c6-49b8561a2d55'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c6ad37b-235a-42f0-84c6-49b8561a2d55-part1', 'scsi-SQEMU_QEMU_HARDDISK_4c6ad37b-235a-42f0-84c6-49b8561a2d55-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c6ad37b-235a-42f0-84c6-49b8561a2d55-part14', 'scsi-SQEMU_QEMU_HARDDISK_4c6ad37b-235a-42f0-84c6-49b8561a2d55-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c6ad37b-235a-42f0-84c6-49b8561a2d55-part15', 'scsi-SQEMU_QEMU_HARDDISK_4c6ad37b-235a-42f0-84c6-49b8561a2d55-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c6ad37b-235a-42f0-84c6-49b8561a2d55-part16', 'scsi-SQEMU_QEMU_HARDDISK_4c6ad37b-235a-42f0-84c6-49b8561a2d55-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-17 00:59:01.606452 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-17 00:59:01.606459 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-17 00:59:01.606499 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-17 00:59:01.606507 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-17 00:59:01.606518 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--fbc9b557--fafa--5136--b4c6--7d286dd557bb-osd--block--fbc9b557--fafa--5136--b4c6--7d286dd557bb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-M9wEuU-Ap7d-T4FW-KZtp-Suyy-BaOI-zarCMP', 'scsi-0QEMU_QEMU_HARDDISK_bd9e2794-f462-41d3-bb22-ac4c4b73281f', 'scsi-SQEMU_QEMU_HARDDISK_bd9e2794-f462-41d3-bb22-ac4c4b73281f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-17 00:59:01.606525 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-17-00-02-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-17 00:59:01.606535 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-17 00:59:01.606585 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26472a97-710b-416f-a0d1-a56c77a5a98a', 'scsi-SQEMU_QEMU_HARDDISK_26472a97-710b-416f-a0d1-a56c77a5a98a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26472a97-710b-416f-a0d1-a56c77a5a98a-part1', 'scsi-SQEMU_QEMU_HARDDISK_26472a97-710b-416f-a0d1-a56c77a5a98a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26472a97-710b-416f-a0d1-a56c77a5a98a-part14', 'scsi-SQEMU_QEMU_HARDDISK_26472a97-710b-416f-a0d1-a56c77a5a98a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26472a97-710b-416f-a0d1-a56c77a5a98a-part15', 'scsi-SQEMU_QEMU_HARDDISK_26472a97-710b-416f-a0d1-a56c77a5a98a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26472a97-710b-416f-a0d1-a56c77a5a98a-part16', 'scsi-SQEMU_QEMU_HARDDISK_26472a97-710b-416f-a0d1-a56c77a5a98a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-17 00:59:01.606598 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-17-00-03-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-17 00:59:01.606604 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.606611 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.606622 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1215eb05-d4be-4bfd-8c82-e464703dc233', 'scsi-SQEMU_QEMU_HARDDISK_1215eb05-d4be-4bfd-8c82-e464703dc233'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-17 00:59:01.606630 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-17-00-03-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-17 00:59:01.606636 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.606643 | orchestrator |
2026-01-17 00:59:01.606684 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-17 00:59:01.606693 | orchestrator | Saturday 17 January 2026 00:48:02 +0000 (0:00:01.249) 0:00:35.029 ******
2026-01-17 00:59:01.606699 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.606706 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.606713 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.606719 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:01.606725 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:01.606732 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:01.606738 | orchestrator |
2026-01-17 00:59:01.606744 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-17 00:59:01.606755 | orchestrator | Saturday 17 January 2026 00:48:04 +0000 (0:00:01.804) 0:00:36.833 ******
2026-01-17 00:59:01.606761 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.606767 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.606774 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.606780 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:01.606786 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:01.606793 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:01.606799 | orchestrator |
2026-01-17 00:59:01.606805 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-17 00:59:01.606811 | orchestrator | Saturday 17 January 2026 00:48:05 +0000 (0:00:01.021) 0:00:37.556 ******
2026-01-17 00:59:01.606818 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.606824 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.606830 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.606845 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.606851 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.606858 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.606864 | orchestrator |
2026-01-17 00:59:01.606871 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-17 00:59:01.606877 | orchestrator | Saturday 17 January 2026 00:48:06 +0000 (0:00:01.021) 0:00:38.577 ******
2026-01-17 00:59:01.606884 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.606890 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.606897 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.606901 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.606905 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.606927 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.606933 | orchestrator |
2026-01-17 00:59:01.606940 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-17 00:59:01.606956 | orchestrator | Saturday 17 January 2026 00:48:06 +0000 (0:00:00.853) 0:00:39.431 ******
2026-01-17 00:59:01.606960 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.606964 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.606967 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.606971 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.606975 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.606978 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.606982 | orchestrator |
2026-01-17 00:59:01.606986 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-17 00:59:01.606990 | orchestrator | Saturday 17 January 2026 00:48:08 +0000 (0:00:01.678) 0:00:41.109 ******
2026-01-17 00:59:01.606993 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.606997 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.607001 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.607004 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.607008 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.607012 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.607015 | orchestrator |
2026-01-17 00:59:01.607019 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-17 00:59:01.607023 | orchestrator | Saturday 17 January 2026 00:48:09 +0000 (0:00:00.925) 0:00:42.035 ******
2026-01-17 00:59:01.607027 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-17 00:59:01.607030 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-17 00:59:01.607034 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-17 00:59:01.607038 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-17 00:59:01.607041 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-17 00:59:01.607045 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-17 00:59:01.607049 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-17 00:59:01.607052 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-17 00:59:01.607062 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-17 00:59:01.607066 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-01-17 00:59:01.607069 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-01-17 00:59:01.607073 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-01-17 00:59:01.607077 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-01-17 00:59:01.607080 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-01-17 00:59:01.607084 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-01-17 00:59:01.607088 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-01-17 00:59:01.607091 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-01-17 00:59:01.607095 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-01-17 00:59:01.607099 | orchestrator |
2026-01-17 00:59:01.607102 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-17 00:59:01.607106 | orchestrator | Saturday 17 January 2026 00:48:13 +0000 (0:00:04.416) 0:00:46.451 ******
2026-01-17 00:59:01.607110 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-17 00:59:01.607115 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-17 00:59:01.607121 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-17 00:59:01.607128 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.607134 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-17 00:59:01.607140 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-17 00:59:01.607146 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-17 00:59:01.607153 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.607158 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-17 00:59:01.607210 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-17 00:59:01.607219 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-17 00:59:01.607225 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.607232 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-17 00:59:01.607239 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-17 00:59:01.607245 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-17 00:59:01.607252 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.607258 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-17 00:59:01.607265 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-17 00:59:01.607271 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-17 00:59:01.607278 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.607284 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-17 00:59:01.607290 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-17 00:59:01.607296 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-17 00:59:01.607303 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.607309 | orchestrator |
2026-01-17 00:59:01.607316 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-17 00:59:01.607322 | orchestrator | Saturday 17 January 2026 00:48:14 +0000 (0:00:00.910) 0:00:47.362 ******
2026-01-17 00:59:01.607329 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.607335 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.607341 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.607348 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-17 00:59:01.607355 | orchestrator |
2026-01-17 00:59:01.607362 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-17 00:59:01.607369 | orchestrator | Saturday 17 January 2026 00:48:16 +0000 (0:00:01.169) 0:00:48.532 ******
2026-01-17 00:59:01.607383 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.607390 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.607396 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.607402 | orchestrator |
2026-01-17 00:59:01.607408 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-17 00:59:01.607415 | orchestrator | Saturday 17 January 2026 00:48:16 +0000 (0:00:00.629) 0:00:49.161 ******
2026-01-17 00:59:01.607422 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.607426 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.607430 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.607433 | orchestrator |
2026-01-17 00:59:01.607437 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-17 00:59:01.607441 | orchestrator | Saturday 17 January 2026 00:48:17 +0000 (0:00:00.586) 0:00:49.748 ******
2026-01-17 00:59:01.607444 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.607448 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.607452 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.607455 | orchestrator |
2026-01-17 00:59:01.607459 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-17 00:59:01.607463 | orchestrator | Saturday 17 January 2026 00:48:18 +0000 (0:00:01.168) 0:00:50.917 ******
2026-01-17 00:59:01.607467 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.607471 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.607474 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.607478 | orchestrator |
2026-01-17 00:59:01.607482 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-17 00:59:01.607485 | orchestrator | Saturday 17 January 2026 00:48:18 +0000 (0:00:00.464) 0:00:51.381 ******
2026-01-17 00:59:01.607489 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-17 00:59:01.607493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-17 00:59:01.607497 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-17 00:59:01.607500 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.607504 | orchestrator |
2026-01-17 00:59:01.607511 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-17 00:59:01.607515 | orchestrator | Saturday 17 January 2026 00:48:19 +0000 (0:00:00.429) 0:00:51.811 ******
2026-01-17 00:59:01.607519 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-17 00:59:01.607523 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-17 00:59:01.607526 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-17 00:59:01.607530 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.607534 | orchestrator |
2026-01-17 00:59:01.607537 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-17 00:59:01.607541 | orchestrator | Saturday 17 January 2026 00:48:19 +0000 (0:00:00.375) 0:00:52.186 ******
2026-01-17 00:59:01.607545 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-17 00:59:01.607548 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-17 00:59:01.607552 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-17 00:59:01.607556 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.607560 | orchestrator |
2026-01-17 00:59:01.607563 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-17 00:59:01.607567 | orchestrator | Saturday 17 January 2026 00:48:20 +0000 (0:00:00.474) 0:00:52.661 ******
2026-01-17 00:59:01.607571 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.607574 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.607578 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.607582 | orchestrator |
2026-01-17 00:59:01.607586 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-17 00:59:01.607589 | orchestrator | Saturday 17 January 2026 00:48:20 +0000 (0:00:00.584) 0:00:53.245 ******
2026-01-17 00:59:01.607593 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-17 00:59:01.607597 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-17 00:59:01.607628 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-17 00:59:01.607635 | orchestrator |
2026-01-17 00:59:01.607641 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-01-17 00:59:01.607648 | orchestrator | Saturday 17 January 2026 00:48:21 +0000 (0:00:00.971) 0:00:54.217 ******
2026-01-17 00:59:01.607654 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-17 00:59:01.607660 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-17 00:59:01.607667 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-17 00:59:01.607675 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-17 00:59:01.607682 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-17 00:59:01.607688 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-17 00:59:01.607695 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-17 00:59:01.607700 | orchestrator |
2026-01-17 00:59:01.607704 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-01-17 00:59:01.607708 | orchestrator | Saturday 17 January 2026 00:48:22 +0000 (0:00:00.821) 0:00:55.038 ******
2026-01-17 00:59:01.607711 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-17 00:59:01.607715 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-17 00:59:01.607719 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-17 00:59:01.607723 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-17 00:59:01.607726 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-17 00:59:01.607730 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-17 00:59:01.607734 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-17 00:59:01.607737 | orchestrator |
2026-01-17 00:59:01.607741 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-17 00:59:01.607745 | orchestrator | Saturday 17 January 2026 00:48:24 +0000 (0:00:01.675) 0:00:56.713 ******
2026-01-17 00:59:01.607749 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:59:01.607753 | orchestrator |
2026-01-17 00:59:01.607757 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-17 00:59:01.607761 | orchestrator | Saturday 17 January 2026 00:48:25 +0000 (0:00:01.233) 0:00:57.947 ******
2026-01-17 00:59:01.607765 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:59:01.607769 | orchestrator |
2026-01-17 00:59:01.607772 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-17 00:59:01.607776 | orchestrator | Saturday 17 January 2026 00:48:26 +0000 (0:00:01.411) 0:00:59.358 ******
2026-01-17 00:59:01.607780 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.607784 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.607787 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.607791 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:01.607795 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:01.607798 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:01.607802 | orchestrator |
2026-01-17 00:59:01.607806 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-17 00:59:01.607810 | orchestrator | Saturday 17 January 2026 00:48:28 +0000 (0:00:01.558) 0:01:00.917 ******
2026-01-17 00:59:01.607813 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.607822 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.607829 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.607833 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.607839 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.607846 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.607853 | orchestrator |
2026-01-17 00:59:01.607860 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-17 00:59:01.607867 | orchestrator | Saturday 17 January 2026 00:48:29 +0000 (0:00:01.199) 0:01:02.117 ******
2026-01-17 00:59:01.607873 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.607880 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.607884 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.607888 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.607894 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.607901 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.607923 | orchestrator |
2026-01-17 00:59:01.607931 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-17 00:59:01.607938 | orchestrator | Saturday 17 January 2026 00:48:30 +0000 (0:00:01.005) 0:01:03.122 ******
2026-01-17 00:59:01.607945 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.607952 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.607959 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.607965 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.607969 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.607973 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.607977 | orchestrator |
2026-01-17 00:59:01.607981 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-17 00:59:01.607984 | orchestrator | Saturday 17 January 2026 00:48:31 +0000 (0:00:01.001) 0:01:04.124 ******
2026-01-17 00:59:01.607988 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.607992 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.607996 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.607999 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:01.608003 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:01.608029 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:01.608035 | orchestrator |
2026-01-17 00:59:01.608042 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-17 00:59:01.608048 | orchestrator | Saturday 17 January 2026 00:48:33 +0000 (0:00:01.442) 0:01:05.566 ******
2026-01-17 00:59:01.608055 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.608061 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.608067 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.608070 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.608074 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.608078 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.608082 | orchestrator |
2026-01-17 00:59:01.608086 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-17 00:59:01.608090 | orchestrator | Saturday 17 January 2026 00:48:34 +0000 (0:00:01.037) 0:01:06.604 ******
2026-01-17 00:59:01.608093 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.608097 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.608101 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.608106 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.608113 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.608120 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.608126 | orchestrator |
2026-01-17 00:59:01.608133 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-17 00:59:01.608139 | orchestrator | Saturday 17 January 2026 00:48:35 +0000 (0:00:01.675) 0:01:08.279 ******
2026-01-17 00:59:01.608146 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.608150 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.608154 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.608158 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:01.608162 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:01.608169 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:01.608173 | orchestrator |
2026-01-17 00:59:01.608177 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-17 00:59:01.608180 | orchestrator | Saturday 17 January 2026 00:48:38 +0000 (0:00:02.416) 0:01:10.696 ******
2026-01-17 00:59:01.608184 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.608188 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.608192 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:01.608195 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.608199 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:01.608203 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:01.608207 | orchestrator |
2026-01-17 00:59:01.608214 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-17 00:59:01.608220 | orchestrator | Saturday 17 January 2026 00:48:39 +0000 (0:00:01.657) 0:01:12.354 ******
2026-01-17 00:59:01.608227 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.608234 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.608238 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.608242 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.608245 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.608249 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.608253 | orchestrator |
2026-01-17 00:59:01.608257 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-17 00:59:01.608261 | orchestrator | Saturday 17 January 2026 00:48:41 +0000 (0:00:01.498) 0:01:13.852 ******
2026-01-17 00:59:01.608264 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.608268 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.608272 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.608276 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:01.608280 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:01.608286 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:01.608292 | orchestrator |
2026-01-17 00:59:01.608298 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-17 00:59:01.608304 | orchestrator | Saturday 17 January 2026 00:48:43 +0000 (0:00:01.686) 0:01:15.539 ******
2026-01-17 00:59:01.608310 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.608316 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.608323 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.608330 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.608336 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.608343 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.608347 | orchestrator |
2026-01-17 00:59:01.608351 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-17 00:59:01.608355 | orchestrator | Saturday 17 January 2026 00:48:44 +0000 (0:00:01.024) 0:01:16.563 ******
2026-01-17 00:59:01.608359 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.608365 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.608369 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.608373 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.608377 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.608380 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.608384 |
orchestrator | 2026-01-17 00:59:01.608388 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-17 00:59:01.608392 | orchestrator | Saturday 17 January 2026 00:48:44 +0000 (0:00:00.797) 0:01:17.361 ****** 2026-01-17 00:59:01.608398 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.608405 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.608411 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.608418 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.608423 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.608427 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.608431 | orchestrator | 2026-01-17 00:59:01.608435 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-17 00:59:01.608439 | orchestrator | Saturday 17 January 2026 00:48:45 +0000 (0:00:00.626) 0:01:17.988 ****** 2026-01-17 00:59:01.608448 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.608455 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.608462 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.608468 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.608475 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.608480 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.608484 | orchestrator | 2026-01-17 00:59:01.608488 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-17 00:59:01.608491 | orchestrator | Saturday 17 January 2026 00:48:46 +0000 (0:00:00.663) 0:01:18.651 ****** 2026-01-17 00:59:01.608495 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.608499 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.608503 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.608506 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.608526 | orchestrator | 
skipping: [testbed-node-2] 2026-01-17 00:59:01.608531 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.608534 | orchestrator | 2026-01-17 00:59:01.608538 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-17 00:59:01.608542 | orchestrator | Saturday 17 January 2026 00:48:46 +0000 (0:00:00.688) 0:01:19.340 ****** 2026-01-17 00:59:01.608545 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.608549 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.608553 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.608556 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.608560 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.608564 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.608567 | orchestrator | 2026-01-17 00:59:01.608571 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-17 00:59:01.608575 | orchestrator | Saturday 17 January 2026 00:48:47 +0000 (0:00:00.868) 0:01:20.209 ****** 2026-01-17 00:59:01.608579 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.608582 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.608586 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.608590 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.608593 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.608597 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.608601 | orchestrator | 2026-01-17 00:59:01.608604 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-17 00:59:01.608608 | orchestrator | Saturday 17 January 2026 00:48:48 +0000 (0:00:00.967) 0:01:21.176 ****** 2026-01-17 00:59:01.608612 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.608615 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.608619 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.608623 | orchestrator | ok: 
[testbed-node-0] 2026-01-17 00:59:01.608626 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.608630 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.608634 | orchestrator | 2026-01-17 00:59:01.608637 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-17 00:59:01.608641 | orchestrator | Saturday 17 January 2026 00:48:50 +0000 (0:00:01.511) 0:01:22.687 ****** 2026-01-17 00:59:01.608645 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:59:01.608648 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:59:01.608652 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:59:01.608656 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:01.608659 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:59:01.608663 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:59:01.608667 | orchestrator | 2026-01-17 00:59:01.608671 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-17 00:59:01.608674 | orchestrator | Saturday 17 January 2026 00:48:52 +0000 (0:00:02.112) 0:01:24.799 ****** 2026-01-17 00:59:01.608678 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:59:01.608681 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:59:01.608685 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:01.608689 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:59:01.608696 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:59:01.608699 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:59:01.608703 | orchestrator | 2026-01-17 00:59:01.608708 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-01-17 00:59:01.608715 | orchestrator | Saturday 17 January 2026 00:48:55 +0000 (0:00:03.125) 0:01:27.925 ****** 2026-01-17 00:59:01.608722 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:59:01.608729 | orchestrator | 2026-01-17 00:59:01.608736 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-01-17 00:59:01.608740 | orchestrator | Saturday 17 January 2026 00:48:56 +0000 (0:00:01.414) 0:01:29.340 ****** 2026-01-17 00:59:01.608744 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.608747 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.608752 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.608758 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.608765 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.608772 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.608778 | orchestrator | 2026-01-17 00:59:01.608785 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-01-17 00:59:01.608789 | orchestrator | Saturday 17 January 2026 00:48:57 +0000 (0:00:00.591) 0:01:29.931 ****** 2026-01-17 00:59:01.608793 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.608799 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.608803 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.608807 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.608810 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.608816 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.608822 | orchestrator | 2026-01-17 00:59:01.608829 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-01-17 00:59:01.608835 | orchestrator | Saturday 17 January 2026 00:48:58 +0000 (0:00:00.844) 0:01:30.776 ****** 2026-01-17 00:59:01.608842 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-17 00:59:01.608847 | orchestrator | ok: [testbed-node-3] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-17 00:59:01.608851 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-17 00:59:01.608855 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-17 00:59:01.608858 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-17 00:59:01.608862 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-17 00:59:01.608866 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-17 00:59:01.608870 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-17 00:59:01.608875 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-17 00:59:01.608882 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-17 00:59:01.608917 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-17 00:59:01.608923 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-17 00:59:01.608927 | orchestrator | 2026-01-17 00:59:01.608930 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-01-17 00:59:01.608934 | orchestrator | Saturday 17 January 2026 00:48:59 +0000 (0:00:01.429) 0:01:32.206 ****** 2026-01-17 00:59:01.608938 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:59:01.608942 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:59:01.608945 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:59:01.608949 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:01.608953 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:59:01.608960 | orchestrator | changed: [testbed-node-2] 2026-01-17 
00:59:01.608964 | orchestrator | 2026-01-17 00:59:01.608968 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-01-17 00:59:01.608974 | orchestrator | Saturday 17 January 2026 00:49:00 +0000 (0:00:01.267) 0:01:33.474 ****** 2026-01-17 00:59:01.608981 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.608988 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.608995 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.609002 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.609006 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.609010 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.609013 | orchestrator | 2026-01-17 00:59:01.609017 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-01-17 00:59:01.609021 | orchestrator | Saturday 17 January 2026 00:49:01 +0000 (0:00:00.694) 0:01:34.168 ****** 2026-01-17 00:59:01.609025 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.609028 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.609034 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.609041 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.609047 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.609054 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.609060 | orchestrator | 2026-01-17 00:59:01.609066 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-01-17 00:59:01.609070 | orchestrator | Saturday 17 January 2026 00:49:02 +0000 (0:00:01.183) 0:01:35.352 ****** 2026-01-17 00:59:01.609074 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.609078 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.609082 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.609085 | orchestrator | skipping: [testbed-node-0] 2026-01-17 
00:59:01.609089 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.609095 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.609101 | orchestrator | 2026-01-17 00:59:01.609108 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-01-17 00:59:01.609114 | orchestrator | Saturday 17 January 2026 00:49:03 +0000 (0:00:00.769) 0:01:36.121 ****** 2026-01-17 00:59:01.609120 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:59:01.609124 | orchestrator | 2026-01-17 00:59:01.609128 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-01-17 00:59:01.609132 | orchestrator | Saturday 17 January 2026 00:49:05 +0000 (0:00:01.587) 0:01:37.709 ****** 2026-01-17 00:59:01.609136 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.609139 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.609143 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.609147 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.609151 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.609154 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.609158 | orchestrator | 2026-01-17 00:59:01.609162 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-01-17 00:59:01.609166 | orchestrator | Saturday 17 January 2026 00:49:54 +0000 (0:00:49.669) 0:02:27.378 ****** 2026-01-17 00:59:01.609170 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-17 00:59:01.609173 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-17 00:59:01.609177 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-17 00:59:01.609186 | orchestrator | 
skipping: [testbed-node-3] 2026-01-17 00:59:01.609190 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-17 00:59:01.609194 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-17 00:59:01.609198 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-17 00:59:01.609205 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.609209 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-17 00:59:01.609213 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-17 00:59:01.609216 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-17 00:59:01.609220 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.609224 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-17 00:59:01.609228 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-17 00:59:01.609231 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-17 00:59:01.609235 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.609239 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-17 00:59:01.609243 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-17 00:59:01.609246 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-17 00:59:01.609250 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.609268 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-17 00:59:01.609272 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-17 00:59:01.609276 | orchestrator | skipping: 
[testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-17 00:59:01.609280 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.609284 | orchestrator | 2026-01-17 00:59:01.609288 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-01-17 00:59:01.609291 | orchestrator | Saturday 17 January 2026 00:49:55 +0000 (0:00:00.842) 0:02:28.221 ****** 2026-01-17 00:59:01.609295 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.609299 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.609303 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.609306 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.609310 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.609314 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.609318 | orchestrator | 2026-01-17 00:59:01.609321 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-01-17 00:59:01.609325 | orchestrator | Saturday 17 January 2026 00:49:56 +0000 (0:00:00.844) 0:02:29.065 ****** 2026-01-17 00:59:01.609329 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.609333 | orchestrator | 2026-01-17 00:59:01.609337 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-01-17 00:59:01.609340 | orchestrator | Saturday 17 January 2026 00:49:56 +0000 (0:00:00.192) 0:02:29.257 ****** 2026-01-17 00:59:01.609344 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.609348 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.609352 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.609355 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.609359 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.609363 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.609367 | orchestrator | 2026-01-17 00:59:01.609370 | orchestrator | 
TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-01-17 00:59:01.609374 | orchestrator | Saturday 17 January 2026 00:49:57 +0000 (0:00:00.722) 0:02:29.980 ****** 2026-01-17 00:59:01.609378 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.609385 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.609391 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.609398 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.609405 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.609411 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.609415 | orchestrator | 2026-01-17 00:59:01.609419 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-01-17 00:59:01.609426 | orchestrator | Saturday 17 January 2026 00:49:58 +0000 (0:00:00.959) 0:02:30.939 ****** 2026-01-17 00:59:01.609430 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.609436 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.609443 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.609449 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.609456 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.609462 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.609469 | orchestrator | 2026-01-17 00:59:01.609475 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-01-17 00:59:01.609482 | orchestrator | Saturday 17 January 2026 00:49:59 +0000 (0:00:00.712) 0:02:31.652 ****** 2026-01-17 00:59:01.609489 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.609495 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.609502 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.609506 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.609509 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.609513 | orchestrator | ok: 
[testbed-node-2] 2026-01-17 00:59:01.609517 | orchestrator | 2026-01-17 00:59:01.609521 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-01-17 00:59:01.609525 | orchestrator | Saturday 17 January 2026 00:50:02 +0000 (0:00:03.587) 0:02:35.240 ****** 2026-01-17 00:59:01.609528 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.609532 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.609536 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.609540 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.609543 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.609547 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.609551 | orchestrator | 2026-01-17 00:59:01.609555 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-01-17 00:59:01.609559 | orchestrator | Saturday 17 January 2026 00:50:03 +0000 (0:00:00.656) 0:02:35.896 ****** 2026-01-17 00:59:01.609566 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:59:01.609572 | orchestrator | 2026-01-17 00:59:01.609578 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-01-17 00:59:01.609585 | orchestrator | Saturday 17 January 2026 00:50:04 +0000 (0:00:01.133) 0:02:37.030 ****** 2026-01-17 00:59:01.609591 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.609597 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.609604 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.609608 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.609612 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.609616 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.609619 | orchestrator | 2026-01-17 00:59:01.609623 | orchestrator | TASK 
[ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-01-17 00:59:01.609627 | orchestrator | Saturday 17 January 2026 00:50:05 +0000 (0:00:00.899) 0:02:37.930 ****** 2026-01-17 00:59:01.609630 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.609634 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.609638 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.609642 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.609645 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.609649 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.609653 | orchestrator | 2026-01-17 00:59:01.609657 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-01-17 00:59:01.609660 | orchestrator | Saturday 17 January 2026 00:50:05 +0000 (0:00:00.585) 0:02:38.515 ****** 2026-01-17 00:59:01.609664 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.609668 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.609687 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.609691 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.609695 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.609702 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.609706 | orchestrator | 2026-01-17 00:59:01.609709 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-01-17 00:59:01.609713 | orchestrator | Saturday 17 January 2026 00:50:06 +0000 (0:00:00.777) 0:02:39.293 ****** 2026-01-17 00:59:01.609717 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.609721 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.609724 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.609728 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.609732 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.609736 
| orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.609739 | orchestrator | 2026-01-17 00:59:01.609744 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-01-17 00:59:01.609751 | orchestrator | Saturday 17 January 2026 00:50:07 +0000 (0:00:00.643) 0:02:39.936 ****** 2026-01-17 00:59:01.609758 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.609764 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.609771 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.609778 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.609785 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.609791 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.609795 | orchestrator | 2026-01-17 00:59:01.609799 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-01-17 00:59:01.609803 | orchestrator | Saturday 17 January 2026 00:50:08 +0000 (0:00:00.870) 0:02:40.806 ****** 2026-01-17 00:59:01.609807 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.609810 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.609814 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.609818 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.609822 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.609825 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.609829 | orchestrator | 2026-01-17 00:59:01.609833 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-01-17 00:59:01.609837 | orchestrator | Saturday 17 January 2026 00:50:09 +0000 (0:00:00.783) 0:02:41.590 ****** 2026-01-17 00:59:01.609840 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.609844 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.609848 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.609852 | 
orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.609855 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.609859 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.609863 | orchestrator | 2026-01-17 00:59:01.609866 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-01-17 00:59:01.609872 | orchestrator | Saturday 17 January 2026 00:50:10 +0000 (0:00:01.055) 0:02:42.645 ****** 2026-01-17 00:59:01.609878 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.609885 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.609891 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.609898 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.609902 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.609906 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.609935 | orchestrator | 2026-01-17 00:59:01.609939 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-01-17 00:59:01.609943 | orchestrator | Saturday 17 January 2026 00:50:10 +0000 (0:00:00.791) 0:02:43.436 ****** 2026-01-17 00:59:01.609948 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.609954 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.609960 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.609967 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.609973 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.609980 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.609986 | orchestrator | 2026-01-17 00:59:01.609993 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-01-17 00:59:01.610005 | orchestrator | Saturday 17 January 2026 00:50:12 +0000 (0:00:01.740) 0:02:45.176 ****** 2026-01-17 00:59:01.610009 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, 
testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:59:01.610032 | orchestrator | 2026-01-17 00:59:01.610036 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-01-17 00:59:01.610042 | orchestrator | Saturday 17 January 2026 00:50:14 +0000 (0:00:01.629) 0:02:46.806 ****** 2026-01-17 00:59:01.610046 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-01-17 00:59:01.610052 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-01-17 00:59:01.610058 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-01-17 00:59:01.610065 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-01-17 00:59:01.610071 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-01-17 00:59:01.610078 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-01-17 00:59:01.610084 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-01-17 00:59:01.610091 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-01-17 00:59:01.610098 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-01-17 00:59:01.610102 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-01-17 00:59:01.610106 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-01-17 00:59:01.610110 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-01-17 00:59:01.610114 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-01-17 00:59:01.610117 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-01-17 00:59:01.610121 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-01-17 00:59:01.610125 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-01-17 00:59:01.610129 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-01-17 00:59:01.610132 | 
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-01-17 00:59:01.610155 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-01-17 00:59:01.610159 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-01-17 00:59:01.610163 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-01-17 00:59:01.610167 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-01-17 00:59:01.610171 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-01-17 00:59:01.610174 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-01-17 00:59:01.610178 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-01-17 00:59:01.610182 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-01-17 00:59:01.610186 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-01-17 00:59:01.610189 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-01-17 00:59:01.610193 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-01-17 00:59:01.610197 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-01-17 00:59:01.610201 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-01-17 00:59:01.610204 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-01-17 00:59:01.610208 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-01-17 00:59:01.610212 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-01-17 00:59:01.610215 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-01-17 00:59:01.610219 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-01-17 00:59:01.610223 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-01-17 00:59:01.610227 | orchestrator | changed: 
[testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-01-17 00:59:01.610234 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-01-17 00:59:01.610238 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-01-17 00:59:01.610242 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-01-17 00:59:01.610245 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-01-17 00:59:01.610249 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-01-17 00:59:01.610253 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-17 00:59:01.610257 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-01-17 00:59:01.610264 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-01-17 00:59:01.610270 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-17 00:59:01.610277 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-01-17 00:59:01.610283 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-17 00:59:01.610289 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-17 00:59:01.610292 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-01-17 00:59:01.610296 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-01-17 00:59:01.610300 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-17 00:59:01.610305 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-17 00:59:01.610311 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-17 00:59:01.610318 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-17 00:59:01.610325 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/ceph/bootstrap-rgw) 2026-01-17 00:59:01.610331 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-17 00:59:01.610337 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-17 00:59:01.610343 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-17 00:59:01.610352 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-17 00:59:01.610359 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-17 00:59:01.610366 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-17 00:59:01.610372 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-17 00:59:01.610376 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-17 00:59:01.610380 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-17 00:59:01.610383 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-17 00:59:01.610387 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-17 00:59:01.610391 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-17 00:59:01.610395 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-17 00:59:01.610399 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-17 00:59:01.610402 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-17 00:59:01.610406 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-17 00:59:01.610410 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-17 00:59:01.610414 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-17 
00:59:01.610417 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-17 00:59:01.610439 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-01-17 00:59:01.610446 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-17 00:59:01.610452 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-17 00:59:01.610460 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-17 00:59:01.610464 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-17 00:59:01.610468 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-17 00:59:01.610472 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-01-17 00:59:01.610475 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-01-17 00:59:01.610479 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-17 00:59:01.610483 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-01-17 00:59:01.610487 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-17 00:59:01.610491 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-17 00:59:01.610494 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-01-17 00:59:01.610498 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-01-17 00:59:01.610502 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-01-17 00:59:01.610506 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-01-17 00:59:01.610509 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-01-17 00:59:01.610513 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-01-17 00:59:01.610517 | orchestrator | changed: 
[testbed-node-2] => (item=/var/log/ceph) 2026-01-17 00:59:01.610522 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-01-17 00:59:01.610529 | orchestrator | 2026-01-17 00:59:01.610535 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-01-17 00:59:01.610542 | orchestrator | Saturday 17 January 2026 00:50:21 +0000 (0:00:06.744) 0:02:53.551 ****** 2026-01-17 00:59:01.610549 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.610553 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.610556 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.610561 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 00:59:01.610565 | orchestrator | 2026-01-17 00:59:01.610568 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-01-17 00:59:01.610572 | orchestrator | Saturday 17 January 2026 00:50:21 +0000 (0:00:00.896) 0:02:54.447 ****** 2026-01-17 00:59:01.610576 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-17 00:59:01.610580 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-17 00:59:01.610584 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-17 00:59:01.610590 | orchestrator | 2026-01-17 00:59:01.610597 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-01-17 00:59:01.610603 | orchestrator | Saturday 17 January 2026 00:50:23 +0000 (0:00:01.440) 0:02:55.888 ****** 2026-01-17 00:59:01.610610 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 
'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-17 00:59:01.610616 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-17 00:59:01.610626 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-17 00:59:01.610630 | orchestrator | 2026-01-17 00:59:01.610633 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-01-17 00:59:01.610637 | orchestrator | Saturday 17 January 2026 00:50:25 +0000 (0:00:01.933) 0:02:57.821 ****** 2026-01-17 00:59:01.610644 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.610648 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.610651 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.610656 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.610662 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.610668 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.610675 | orchestrator | 2026-01-17 00:59:01.610681 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-01-17 00:59:01.610687 | orchestrator | Saturday 17 January 2026 00:50:26 +0000 (0:00:00.830) 0:02:58.651 ****** 2026-01-17 00:59:01.610690 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.610694 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.610698 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.610701 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.610705 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.610709 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.610712 | orchestrator | 2026-01-17 00:59:01.610716 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-01-17 
00:59:01.610720 | orchestrator | Saturday 17 January 2026 00:50:27 +0000 (0:00:00.950) 0:02:59.602 ****** 2026-01-17 00:59:01.610723 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.610727 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.610731 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.610734 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.610738 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.610742 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.610746 | orchestrator | 2026-01-17 00:59:01.610764 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-01-17 00:59:01.610769 | orchestrator | Saturday 17 January 2026 00:50:27 +0000 (0:00:00.536) 0:03:00.139 ****** 2026-01-17 00:59:01.610773 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.610776 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.610780 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.610784 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.610787 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.610791 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.610795 | orchestrator | 2026-01-17 00:59:01.610799 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-01-17 00:59:01.610802 | orchestrator | Saturday 17 January 2026 00:50:28 +0000 (0:00:00.708) 0:03:00.847 ****** 2026-01-17 00:59:01.610806 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.610810 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.610814 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.610818 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.610821 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.610825 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.610829 | orchestrator | 
2026-01-17 00:59:01.610833 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-01-17 00:59:01.610837 | orchestrator | Saturday 17 January 2026 00:50:28 +0000 (0:00:00.536) 0:03:01.384 ****** 2026-01-17 00:59:01.610840 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.610844 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.610848 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.610851 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.610855 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.610859 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.610863 | orchestrator | 2026-01-17 00:59:01.610867 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-01-17 00:59:01.610870 | orchestrator | Saturday 17 January 2026 00:50:29 +0000 (0:00:00.680) 0:03:02.064 ****** 2026-01-17 00:59:01.610874 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.610878 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.610881 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.610888 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.610892 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.610896 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.610900 | orchestrator | 2026-01-17 00:59:01.610903 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-01-17 00:59:01.610919 | orchestrator | Saturday 17 January 2026 00:50:30 +0000 (0:00:00.683) 0:03:02.748 ****** 2026-01-17 00:59:01.610924 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.610928 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.610931 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.610935 | orchestrator | 
skipping: [testbed-node-0] 2026-01-17 00:59:01.610939 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.610943 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.610946 | orchestrator | 2026-01-17 00:59:01.610950 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-01-17 00:59:01.610954 | orchestrator | Saturday 17 January 2026 00:50:31 +0000 (0:00:00.910) 0:03:03.658 ****** 2026-01-17 00:59:01.610957 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.610961 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.610965 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.610969 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.610972 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.610976 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.610980 | orchestrator | 2026-01-17 00:59:01.610983 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-01-17 00:59:01.610987 | orchestrator | Saturday 17 January 2026 00:50:34 +0000 (0:00:03.238) 0:03:06.896 ****** 2026-01-17 00:59:01.610991 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.610995 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.610998 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.611002 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.611006 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.611009 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.611013 | orchestrator | 2026-01-17 00:59:01.611017 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-01-17 00:59:01.611023 | orchestrator | Saturday 17 January 2026 00:50:35 +0000 (0:00:01.135) 0:03:08.032 ****** 2026-01-17 00:59:01.611027 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.611031 | orchestrator | ok: [testbed-node-4] 
2026-01-17 00:59:01.611034 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.611038 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.611042 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.611045 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.611049 | orchestrator | 2026-01-17 00:59:01.611053 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-01-17 00:59:01.611057 | orchestrator | Saturday 17 January 2026 00:50:36 +0000 (0:00:00.900) 0:03:08.932 ****** 2026-01-17 00:59:01.611060 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.611064 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.611068 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.611071 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.611075 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.611079 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.611082 | orchestrator | 2026-01-17 00:59:01.611086 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-01-17 00:59:01.611090 | orchestrator | Saturday 17 January 2026 00:50:37 +0000 (0:00:01.217) 0:03:10.150 ****** 2026-01-17 00:59:01.611094 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-17 00:59:01.611098 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-17 00:59:01.611101 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-17 00:59:01.611108 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.611127 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.611131 | orchestrator | skipping: [testbed-node-2] 
2026-01-17 00:59:01.611135 | orchestrator | 2026-01-17 00:59:01.611139 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-01-17 00:59:01.611142 | orchestrator | Saturday 17 January 2026 00:50:38 +0000 (0:00:00.674) 0:03:10.825 ****** 2026-01-17 00:59:01.611148 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-01-17 00:59:01.611153 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-01-17 00:59:01.611158 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.611162 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-01-17 00:59:01.611166 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-01-17 00:59:01.611170 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast 
endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-01-17 00:59:01.611174 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-01-17 00:59:01.611177 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.611181 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.611185 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.611189 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.611193 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.611196 | orchestrator | 2026-01-17 00:59:01.611200 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-01-17 00:59:01.611204 | orchestrator | Saturday 17 January 2026 00:50:39 +0000 (0:00:00.865) 0:03:11.690 ****** 2026-01-17 00:59:01.611208 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.611211 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.611215 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.611219 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.611223 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.611226 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.611230 | orchestrator | 2026-01-17 00:59:01.611234 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-01-17 00:59:01.611241 | orchestrator | Saturday 17 January 2026 00:50:39 +0000 (0:00:00.709) 0:03:12.400 ****** 2026-01-17 00:59:01.611245 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.611249 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.611256 | orchestrator 
| skipping: [testbed-node-5] 2026-01-17 00:59:01.611259 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.611263 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.611267 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.611271 | orchestrator | 2026-01-17 00:59:01.611274 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-17 00:59:01.611278 | orchestrator | Saturday 17 January 2026 00:50:40 +0000 (0:00:01.010) 0:03:13.410 ****** 2026-01-17 00:59:01.611282 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.611286 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.611289 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.611293 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.611297 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.611301 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.611304 | orchestrator | 2026-01-17 00:59:01.611308 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-17 00:59:01.611312 | orchestrator | Saturday 17 January 2026 00:50:41 +0000 (0:00:00.688) 0:03:14.099 ****** 2026-01-17 00:59:01.611316 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.611319 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.611323 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.611327 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.611331 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.611334 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.611338 | orchestrator | 2026-01-17 00:59:01.611342 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-17 00:59:01.611357 | orchestrator | Saturday 17 January 2026 00:50:42 +0000 (0:00:00.765) 
0:03:14.864 ****** 2026-01-17 00:59:01.611362 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.611366 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.611369 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.611373 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.611377 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.611380 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.611384 | orchestrator | 2026-01-17 00:59:01.611388 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-17 00:59:01.611392 | orchestrator | Saturday 17 January 2026 00:50:42 +0000 (0:00:00.592) 0:03:15.456 ****** 2026-01-17 00:59:01.611395 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.611399 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.611403 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.611407 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.611410 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.611414 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.611418 | orchestrator | 2026-01-17 00:59:01.611421 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-17 00:59:01.611425 | orchestrator | Saturday 17 January 2026 00:50:43 +0000 (0:00:00.831) 0:03:16.287 ****** 2026-01-17 00:59:01.611429 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-17 00:59:01.611432 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-17 00:59:01.611436 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-17 00:59:01.611440 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.611444 | orchestrator | 2026-01-17 00:59:01.611447 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-17 00:59:01.611451 | orchestrator | 
Saturday 17 January 2026 00:50:44 +0000 (0:00:00.535) 0:03:16.823 ****** 2026-01-17 00:59:01.611455 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-17 00:59:01.611458 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-17 00:59:01.611462 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-17 00:59:01.611466 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.611472 | orchestrator | 2026-01-17 00:59:01.611476 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-17 00:59:01.611479 | orchestrator | Saturday 17 January 2026 00:50:44 +0000 (0:00:00.313) 0:03:17.137 ****** 2026-01-17 00:59:01.611483 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-17 00:59:01.611487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-17 00:59:01.611490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-17 00:59:01.611494 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.611498 | orchestrator | 2026-01-17 00:59:01.611502 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-17 00:59:01.611505 | orchestrator | Saturday 17 January 2026 00:50:45 +0000 (0:00:00.388) 0:03:17.525 ****** 2026-01-17 00:59:01.611509 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.611513 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.611517 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.611520 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.611524 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.611528 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.611531 | orchestrator | 2026-01-17 00:59:01.611535 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-17 00:59:01.611539 | orchestrator | Saturday 
17 January 2026 00:50:45 +0000 (0:00:00.546) 0:03:18.072 ****** 2026-01-17 00:59:01.611543 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-17 00:59:01.611546 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-17 00:59:01.611550 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-01-17 00:59:01.611554 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.611557 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-01-17 00:59:01.611561 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.611565 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-01-17 00:59:01.611569 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.611572 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-17 00:59:01.611576 | orchestrator | 2026-01-17 00:59:01.611580 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-01-17 00:59:01.611586 | orchestrator | Saturday 17 January 2026 00:50:47 +0000 (0:00:02.307) 0:03:20.379 ****** 2026-01-17 00:59:01.611590 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:59:01.611593 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:59:01.611597 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:59:01.611601 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:01.611604 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:59:01.611608 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:59:01.611612 | orchestrator | 2026-01-17 00:59:01.611615 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-17 00:59:01.611619 | orchestrator | Saturday 17 January 2026 00:50:50 +0000 (0:00:02.933) 0:03:23.312 ****** 2026-01-17 00:59:01.611623 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:59:01.611627 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:59:01.611630 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:01.611634 | 
orchestrator | changed: [testbed-node-3] 2026-01-17 00:59:01.611638 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:59:01.611641 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:59:01.611645 | orchestrator | 2026-01-17 00:59:01.611649 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-01-17 00:59:01.611653 | orchestrator | Saturday 17 January 2026 00:50:51 +0000 (0:00:01.115) 0:03:24.427 ****** 2026-01-17 00:59:01.611656 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.611660 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.611664 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.611667 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:59:01.611671 | orchestrator | 2026-01-17 00:59:01.611675 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-01-17 00:59:01.611692 | orchestrator | Saturday 17 January 2026 00:50:53 +0000 (0:00:01.152) 0:03:25.580 ****** 2026-01-17 00:59:01.611697 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.611701 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.611704 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.611708 | orchestrator | 2026-01-17 00:59:01.611712 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-01-17 00:59:01.611716 | orchestrator | Saturday 17 January 2026 00:50:53 +0000 (0:00:00.365) 0:03:25.945 ****** 2026-01-17 00:59:01.611720 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:01.611723 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:59:01.611727 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:59:01.611731 | orchestrator | 2026-01-17 00:59:01.611734 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-01-17 
00:59:01.611738 | orchestrator | Saturday 17 January 2026 00:50:54 +0000 (0:00:01.404) 0:03:27.350 ****** 2026-01-17 00:59:01.611742 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-17 00:59:01.611746 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-17 00:59:01.611749 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-17 00:59:01.611753 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.611757 | orchestrator | 2026-01-17 00:59:01.611761 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-01-17 00:59:01.611764 | orchestrator | Saturday 17 January 2026 00:50:55 +0000 (0:00:00.838) 0:03:28.188 ****** 2026-01-17 00:59:01.611768 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.611772 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.611776 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.611779 | orchestrator | 2026-01-17 00:59:01.611783 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-17 00:59:01.611787 | orchestrator | Saturday 17 January 2026 00:50:56 +0000 (0:00:00.403) 0:03:28.592 ****** 2026-01-17 00:59:01.611790 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.611794 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.611798 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.611802 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 00:59:01.611805 | orchestrator | 2026-01-17 00:59:01.611809 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-17 00:59:01.611813 | orchestrator | Saturday 17 January 2026 00:50:57 +0000 (0:00:01.130) 0:03:29.722 ****** 2026-01-17 00:59:01.611827 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-17 
00:59:01.611831 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-17 00:59:01.611834 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-17 00:59:01.611838 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.611842 | orchestrator | 2026-01-17 00:59:01.611846 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-17 00:59:01.611850 | orchestrator | Saturday 17 January 2026 00:50:57 +0000 (0:00:00.563) 0:03:30.286 ****** 2026-01-17 00:59:01.611853 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.611857 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.611861 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.611865 | orchestrator | 2026-01-17 00:59:01.611868 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-17 00:59:01.611872 | orchestrator | Saturday 17 January 2026 00:50:58 +0000 (0:00:00.406) 0:03:30.692 ****** 2026-01-17 00:59:01.611876 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.611880 | orchestrator | 2026-01-17 00:59:01.611883 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-17 00:59:01.611887 | orchestrator | Saturday 17 January 2026 00:50:58 +0000 (0:00:00.224) 0:03:30.917 ****** 2026-01-17 00:59:01.611891 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.611897 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.611901 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.611905 | orchestrator | 2026-01-17 00:59:01.611919 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-17 00:59:01.611923 | orchestrator | Saturday 17 January 2026 00:50:58 +0000 (0:00:00.307) 0:03:31.224 ****** 2026-01-17 00:59:01.611926 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.611930 | 
orchestrator | 2026-01-17 00:59:01.611934 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-17 00:59:01.611948 | orchestrator | Saturday 17 January 2026 00:50:58 +0000 (0:00:00.218) 0:03:31.443 ****** 2026-01-17 00:59:01.611952 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.611956 | orchestrator | 2026-01-17 00:59:01.611959 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-17 00:59:01.611963 | orchestrator | Saturday 17 January 2026 00:50:59 +0000 (0:00:00.231) 0:03:31.675 ****** 2026-01-17 00:59:01.611967 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.611971 | orchestrator | 2026-01-17 00:59:01.611974 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-17 00:59:01.611978 | orchestrator | Saturday 17 January 2026 00:50:59 +0000 (0:00:00.122) 0:03:31.798 ****** 2026-01-17 00:59:01.611982 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.611986 | orchestrator | 2026-01-17 00:59:01.611989 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-01-17 00:59:01.611993 | orchestrator | Saturday 17 January 2026 00:51:00 +0000 (0:00:00.773) 0:03:32.571 ****** 2026-01-17 00:59:01.611997 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.612000 | orchestrator | 2026-01-17 00:59:01.612004 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-01-17 00:59:01.612008 | orchestrator | Saturday 17 January 2026 00:51:00 +0000 (0:00:00.244) 0:03:32.815 ****** 2026-01-17 00:59:01.612012 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-17 00:59:01.612015 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-17 00:59:01.612019 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-17 
00:59:01.612023 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.612027 | orchestrator | 2026-01-17 00:59:01.612030 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-17 00:59:01.612048 | orchestrator | Saturday 17 January 2026 00:51:00 +0000 (0:00:00.534) 0:03:33.350 ****** 2026-01-17 00:59:01.612052 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.612056 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.612060 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.612064 | orchestrator | 2026-01-17 00:59:01.612067 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-17 00:59:01.612071 | orchestrator | Saturday 17 January 2026 00:51:01 +0000 (0:00:00.341) 0:03:33.691 ****** 2026-01-17 00:59:01.612075 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.612079 | orchestrator | 2026-01-17 00:59:01.612082 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-17 00:59:01.612086 | orchestrator | Saturday 17 January 2026 00:51:01 +0000 (0:00:00.213) 0:03:33.905 ****** 2026-01-17 00:59:01.612090 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.612094 | orchestrator | 2026-01-17 00:59:01.612097 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-17 00:59:01.612101 | orchestrator | Saturday 17 January 2026 00:51:01 +0000 (0:00:00.220) 0:03:34.125 ****** 2026-01-17 00:59:01.612105 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.612108 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.612112 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.612116 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 00:59:01.612120 | orchestrator | 2026-01-17 00:59:01.612123 
| orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-01-17 00:59:01.612130 | orchestrator | Saturday 17 January 2026 00:51:02 +0000 (0:00:01.267) 0:03:35.393 ****** 2026-01-17 00:59:01.612134 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.612137 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.612141 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.612145 | orchestrator | 2026-01-17 00:59:01.612149 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-17 00:59:01.612152 | orchestrator | Saturday 17 January 2026 00:51:03 +0000 (0:00:00.359) 0:03:35.753 ****** 2026-01-17 00:59:01.612156 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:59:01.612160 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:59:01.612164 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:59:01.612167 | orchestrator | 2026-01-17 00:59:01.612171 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-17 00:59:01.612175 | orchestrator | Saturday 17 January 2026 00:51:04 +0000 (0:00:01.368) 0:03:37.121 ****** 2026-01-17 00:59:01.612179 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-17 00:59:01.612182 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-17 00:59:01.612186 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-17 00:59:01.612190 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.612193 | orchestrator | 2026-01-17 00:59:01.612197 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-17 00:59:01.612201 | orchestrator | Saturday 17 January 2026 00:51:05 +0000 (0:00:00.903) 0:03:38.025 ****** 2026-01-17 00:59:01.612205 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.612208 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.612212 | 
orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.612216 | orchestrator | 2026-01-17 00:59:01.612220 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-17 00:59:01.612223 | orchestrator | Saturday 17 January 2026 00:51:06 +0000 (0:00:00.543) 0:03:38.569 ****** 2026-01-17 00:59:01.612227 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.612231 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.612235 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.612238 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 00:59:01.612242 | orchestrator | 2026-01-17 00:59:01.612246 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-17 00:59:01.612250 | orchestrator | Saturday 17 January 2026 00:51:06 +0000 (0:00:00.874) 0:03:39.443 ****** 2026-01-17 00:59:01.612253 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.612257 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.612261 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.612265 | orchestrator | 2026-01-17 00:59:01.612270 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-17 00:59:01.612274 | orchestrator | Saturday 17 January 2026 00:51:07 +0000 (0:00:00.580) 0:03:40.023 ****** 2026-01-17 00:59:01.612278 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:59:01.612281 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:59:01.612285 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:59:01.612289 | orchestrator | 2026-01-17 00:59:01.612293 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-17 00:59:01.612296 | orchestrator | Saturday 17 January 2026 00:51:08 +0000 (0:00:01.299) 0:03:41.323 ****** 2026-01-17 00:59:01.612300 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-17 00:59:01.612304 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-17 00:59:01.612308 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-17 00:59:01.612312 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.612318 | orchestrator | 2026-01-17 00:59:01.612325 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-17 00:59:01.612334 | orchestrator | Saturday 17 January 2026 00:51:09 +0000 (0:00:00.624) 0:03:41.947 ****** 2026-01-17 00:59:01.612340 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.612346 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.612352 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.612358 | orchestrator | 2026-01-17 00:59:01.612364 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-01-17 00:59:01.612369 | orchestrator | Saturday 17 January 2026 00:51:09 +0000 (0:00:00.333) 0:03:42.281 ****** 2026-01-17 00:59:01.612375 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.612381 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.612386 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.612392 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.612398 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.612424 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.612431 | orchestrator | 2026-01-17 00:59:01.612437 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-17 00:59:01.612443 | orchestrator | Saturday 17 January 2026 00:51:10 +0000 (0:00:00.941) 0:03:43.222 ****** 2026-01-17 00:59:01.612450 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.612456 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.612463 | orchestrator | skipping: 
[testbed-node-5] 2026-01-17 00:59:01.612469 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:59:01.612476 | orchestrator | 2026-01-17 00:59:01.612482 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-17 00:59:01.612487 | orchestrator | Saturday 17 January 2026 00:51:11 +0000 (0:00:00.798) 0:03:44.021 ****** 2026-01-17 00:59:01.612491 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.612495 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.612499 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.612503 | orchestrator | 2026-01-17 00:59:01.612506 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-17 00:59:01.612510 | orchestrator | Saturday 17 January 2026 00:51:12 +0000 (0:00:00.556) 0:03:44.577 ****** 2026-01-17 00:59:01.612514 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:01.612518 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:59:01.612521 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:59:01.612525 | orchestrator | 2026-01-17 00:59:01.612529 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-17 00:59:01.612533 | orchestrator | Saturday 17 January 2026 00:51:13 +0000 (0:00:01.337) 0:03:45.915 ****** 2026-01-17 00:59:01.612537 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-17 00:59:01.612540 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-17 00:59:01.612544 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-17 00:59:01.612548 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.612552 | orchestrator | 2026-01-17 00:59:01.612555 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-17 00:59:01.612559 | 
orchestrator | Saturday 17 January 2026 00:51:14 +0000 (0:00:00.622) 0:03:46.538 ****** 2026-01-17 00:59:01.612563 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.612567 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.612570 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.612576 | orchestrator | 2026-01-17 00:59:01.612582 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-01-17 00:59:01.612593 | orchestrator | 2026-01-17 00:59:01.612600 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-17 00:59:01.612607 | orchestrator | Saturday 17 January 2026 00:51:14 +0000 (0:00:00.696) 0:03:47.234 ****** 2026-01-17 00:59:01.612613 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:59:01.612619 | orchestrator | 2026-01-17 00:59:01.612625 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-17 00:59:01.612637 | orchestrator | Saturday 17 January 2026 00:51:15 +0000 (0:00:00.910) 0:03:48.145 ****** 2026-01-17 00:59:01.612644 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:59:01.612650 | orchestrator | 2026-01-17 00:59:01.612655 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-17 00:59:01.612660 | orchestrator | Saturday 17 January 2026 00:51:16 +0000 (0:00:00.522) 0:03:48.667 ****** 2026-01-17 00:59:01.612666 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.612673 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.612679 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.612686 | orchestrator | 2026-01-17 00:59:01.612693 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2026-01-17 00:59:01.612700 | orchestrator | Saturday 17 January 2026 00:51:17 +0000 (0:00:01.043) 0:03:49.710 ****** 2026-01-17 00:59:01.612706 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.612712 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.612719 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.612725 | orchestrator | 2026-01-17 00:59:01.612736 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-17 00:59:01.612742 | orchestrator | Saturday 17 January 2026 00:51:17 +0000 (0:00:00.351) 0:03:50.062 ****** 2026-01-17 00:59:01.612749 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.612756 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.612763 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.612769 | orchestrator | 2026-01-17 00:59:01.612776 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-17 00:59:01.612783 | orchestrator | Saturday 17 January 2026 00:51:17 +0000 (0:00:00.328) 0:03:50.391 ****** 2026-01-17 00:59:01.612789 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.612796 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.612802 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.612820 | orchestrator | 2026-01-17 00:59:01.612828 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-17 00:59:01.612834 | orchestrator | Saturday 17 January 2026 00:51:18 +0000 (0:00:00.294) 0:03:50.685 ****** 2026-01-17 00:59:01.612841 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.612848 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.612855 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.612862 | orchestrator | 2026-01-17 00:59:01.612869 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-01-17 00:59:01.612875 | orchestrator | Saturday 17 January 2026 00:51:19 +0000 (0:00:01.166) 0:03:51.852 ****** 2026-01-17 00:59:01.612882 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.612887 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.612893 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.612900 | orchestrator | 2026-01-17 00:59:01.612937 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-17 00:59:01.612946 | orchestrator | Saturday 17 January 2026 00:51:19 +0000 (0:00:00.335) 0:03:52.188 ****** 2026-01-17 00:59:01.612982 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.612991 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.612998 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.613004 | orchestrator | 2026-01-17 00:59:01.613012 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-17 00:59:01.613018 | orchestrator | Saturday 17 January 2026 00:51:19 +0000 (0:00:00.317) 0:03:52.505 ****** 2026-01-17 00:59:01.613025 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.613032 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.613039 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.613046 | orchestrator | 2026-01-17 00:59:01.613052 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-17 00:59:01.613058 | orchestrator | Saturday 17 January 2026 00:51:20 +0000 (0:00:00.794) 0:03:53.300 ****** 2026-01-17 00:59:01.613070 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.613077 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.613082 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.613089 | orchestrator | 2026-01-17 00:59:01.613095 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-17 
00:59:01.613103 | orchestrator | Saturday 17 January 2026 00:51:21 +0000 (0:00:01.211) 0:03:54.511 ****** 2026-01-17 00:59:01.613109 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.613116 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.613123 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.613130 | orchestrator | 2026-01-17 00:59:01.613136 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-17 00:59:01.613143 | orchestrator | Saturday 17 January 2026 00:51:22 +0000 (0:00:00.317) 0:03:54.829 ****** 2026-01-17 00:59:01.613149 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.613156 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.613162 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.613169 | orchestrator | 2026-01-17 00:59:01.613176 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-17 00:59:01.613183 | orchestrator | Saturday 17 January 2026 00:51:22 +0000 (0:00:00.348) 0:03:55.177 ****** 2026-01-17 00:59:01.613190 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.613197 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.613203 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.613210 | orchestrator | 2026-01-17 00:59:01.613217 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-17 00:59:01.613224 | orchestrator | Saturday 17 January 2026 00:51:23 +0000 (0:00:00.359) 0:03:55.537 ****** 2026-01-17 00:59:01.613230 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.613236 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.613242 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.613249 | orchestrator | 2026-01-17 00:59:01.613255 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-17 00:59:01.613262 | 
orchestrator | Saturday 17 January 2026 00:51:23 +0000 (0:00:00.341) 0:03:55.879 ****** 2026-01-17 00:59:01.613268 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.613274 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.613280 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.613287 | orchestrator | 2026-01-17 00:59:01.613293 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-17 00:59:01.613300 | orchestrator | Saturday 17 January 2026 00:51:23 +0000 (0:00:00.606) 0:03:56.486 ****** 2026-01-17 00:59:01.613306 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.613313 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.613319 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.613326 | orchestrator | 2026-01-17 00:59:01.613332 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-17 00:59:01.613339 | orchestrator | Saturday 17 January 2026 00:51:24 +0000 (0:00:00.382) 0:03:56.868 ****** 2026-01-17 00:59:01.613345 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.613352 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.613358 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.613364 | orchestrator | 2026-01-17 00:59:01.613370 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-17 00:59:01.613376 | orchestrator | Saturday 17 January 2026 00:51:24 +0000 (0:00:00.533) 0:03:57.401 ****** 2026-01-17 00:59:01.613385 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.613391 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.613396 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.613402 | orchestrator | 2026-01-17 00:59:01.613407 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-17 00:59:01.613417 | orchestrator | 
Saturday 17 January 2026 00:51:25 +0000 (0:00:00.550) 0:03:57.952 ****** 2026-01-17 00:59:01.613423 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.613434 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.613441 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.613447 | orchestrator | 2026-01-17 00:59:01.613454 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-17 00:59:01.613461 | orchestrator | Saturday 17 January 2026 00:51:26 +0000 (0:00:00.948) 0:03:58.901 ****** 2026-01-17 00:59:01.613467 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.613474 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.613481 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.613487 | orchestrator | 2026-01-17 00:59:01.613493 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-01-17 00:59:01.613499 | orchestrator | Saturday 17 January 2026 00:51:27 +0000 (0:00:00.655) 0:03:59.557 ****** 2026-01-17 00:59:01.613505 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.613511 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.613517 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.613523 | orchestrator | 2026-01-17 00:59:01.613529 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-01-17 00:59:01.613536 | orchestrator | Saturday 17 January 2026 00:51:27 +0000 (0:00:00.410) 0:03:59.967 ****** 2026-01-17 00:59:01.613543 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:59:01.613549 | orchestrator | 2026-01-17 00:59:01.613556 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-01-17 00:59:01.613563 | orchestrator | Saturday 17 January 2026 00:51:28 +0000 (0:00:01.055) 0:04:01.023 ****** 2026-01-17 00:59:01.613569 | orchestrator 
| skipping: [testbed-node-0] 2026-01-17 00:59:01.613576 | orchestrator | 2026-01-17 00:59:01.613606 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-01-17 00:59:01.613615 | orchestrator | Saturday 17 January 2026 00:51:28 +0000 (0:00:00.118) 0:04:01.142 ****** 2026-01-17 00:59:01.613622 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-17 00:59:01.613628 | orchestrator | 2026-01-17 00:59:01.613635 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-01-17 00:59:01.613642 | orchestrator | Saturday 17 January 2026 00:51:29 +0000 (0:00:00.717) 0:04:01.859 ****** 2026-01-17 00:59:01.613648 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.613655 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.613662 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.613668 | orchestrator | 2026-01-17 00:59:01.613674 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-01-17 00:59:01.613680 | orchestrator | Saturday 17 January 2026 00:51:29 +0000 (0:00:00.285) 0:04:02.145 ****** 2026-01-17 00:59:01.613687 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.613694 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.613700 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.613707 | orchestrator | 2026-01-17 00:59:01.613714 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-01-17 00:59:01.613721 | orchestrator | Saturday 17 January 2026 00:51:30 +0000 (0:00:00.445) 0:04:02.590 ****** 2026-01-17 00:59:01.613727 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:01.613734 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:59:01.613741 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:59:01.613747 | orchestrator | 2026-01-17 00:59:01.613754 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for 
containers)] *********** 2026-01-17 00:59:01.613761 | orchestrator | Saturday 17 January 2026 00:51:31 +0000 (0:00:01.201) 0:04:03.792 ****** 2026-01-17 00:59:01.613768 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:01.613774 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:59:01.613781 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:59:01.613788 | orchestrator | 2026-01-17 00:59:01.613795 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-01-17 00:59:01.613801 | orchestrator | Saturday 17 January 2026 00:51:32 +0000 (0:00:01.015) 0:04:04.808 ****** 2026-01-17 00:59:01.613808 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:01.613823 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:59:01.613829 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:59:01.613836 | orchestrator | 2026-01-17 00:59:01.613843 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-01-17 00:59:01.613850 | orchestrator | Saturday 17 January 2026 00:51:33 +0000 (0:00:00.779) 0:04:05.587 ****** 2026-01-17 00:59:01.613857 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.613864 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.613870 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.613877 | orchestrator | 2026-01-17 00:59:01.613884 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-01-17 00:59:01.613891 | orchestrator | Saturday 17 January 2026 00:51:33 +0000 (0:00:00.674) 0:04:06.262 ****** 2026-01-17 00:59:01.613898 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:01.613904 | orchestrator | 2026-01-17 00:59:01.613924 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-01-17 00:59:01.613931 | orchestrator | Saturday 17 January 2026 00:51:35 +0000 (0:00:01.816) 0:04:08.078 ****** 2026-01-17 00:59:01.613938 | 
orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.613945 | orchestrator | 2026-01-17 00:59:01.613952 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-01-17 00:59:01.613958 | orchestrator | Saturday 17 January 2026 00:51:36 +0000 (0:00:00.735) 0:04:08.814 ****** 2026-01-17 00:59:01.613965 | orchestrator | changed: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-17 00:59:01.613972 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-01-17 00:59:01.613979 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-17 00:59:01.613986 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-17 00:59:01.613993 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-01-17 00:59:01.614000 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-17 00:59:01.614007 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-17 00:59:01.614044 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-01-17 00:59:01.614053 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-17 00:59:01.614060 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2026-01-17 00:59:01.614067 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-01-17 00:59:01.614074 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-01-17 00:59:01.614081 | orchestrator | 2026-01-17 00:59:01.614089 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-01-17 00:59:01.614096 | orchestrator | Saturday 17 January 2026 00:51:39 +0000 (0:00:03.428) 0:04:12.243 ****** 2026-01-17 00:59:01.614103 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:01.614110 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:59:01.614118 | orchestrator | changed: [testbed-node-2] 
2026-01-17 00:59:01.614125 | orchestrator | 2026-01-17 00:59:01.614132 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-01-17 00:59:01.614139 | orchestrator | Saturday 17 January 2026 00:51:41 +0000 (0:00:01.296) 0:04:13.540 ****** 2026-01-17 00:59:01.614147 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.614154 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.614162 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.614169 | orchestrator | 2026-01-17 00:59:01.614176 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-01-17 00:59:01.614184 | orchestrator | Saturday 17 January 2026 00:51:41 +0000 (0:00:00.478) 0:04:14.018 ****** 2026-01-17 00:59:01.614191 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.614198 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.614205 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.614213 | orchestrator | 2026-01-17 00:59:01.614220 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-01-17 00:59:01.614227 | orchestrator | Saturday 17 January 2026 00:51:42 +0000 (0:00:00.667) 0:04:14.686 ****** 2026-01-17 00:59:01.614259 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:59:01.614267 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:01.614274 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:59:01.614281 | orchestrator | 2026-01-17 00:59:01.614288 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-01-17 00:59:01.614295 | orchestrator | Saturday 17 January 2026 00:51:44 +0000 (0:00:02.360) 0:04:17.046 ****** 2026-01-17 00:59:01.614302 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:01.614309 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:59:01.614316 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:59:01.614323 | orchestrator | 
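For context, the "Generate initial monmap" and "Ceph monitor mkfs with keyring" tasks logged above boil down to two commands per mon node. A minimal sketch of the equivalent Ansible tasks follows (task names mirror the log; the variables `fsid`, `monitor_name`, and `monitor_address` are illustrative assumptions, not values from this run):

```yaml
# Sketch only: mirrors the logged ceph-mon tasks; paths and variables are assumed.
- name: Generate initial monmap
  command: >
    monmaptool --create --clobber
    --fsid {{ fsid }}
    --addv {{ monitor_name }} [v2:{{ monitor_address }}:3300,v1:{{ monitor_address }}:6789]
    /etc/ceph/monmap
  args:
    creates: /etc/ceph/monmap

- name: Ceph monitor mkfs with keyring
  command: >
    ceph-mon --cluster ceph --setuser ceph --setgroup ceph
    --mkfs -i {{ monitor_name }}
    --monmap /etc/ceph/monmap
    --keyring /etc/ceph/ceph.mon.keyring
  args:
    creates: /var/lib/ceph/mon/ceph-{{ monitor_name }}/keyring
```

The `creates:` guards make both tasks idempotent, which matches the `changed`-then-`ok` behavior such tasks show on reruns.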
2026-01-17 00:59:01.614330 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-01-17 00:59:01.614337 | orchestrator | Saturday 17 January 2026 00:51:45 +0000 (0:00:01.373) 0:04:18.419 ****** 2026-01-17 00:59:01.614344 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.614351 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.614358 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.614364 | orchestrator | 2026-01-17 00:59:01.614370 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-01-17 00:59:01.614376 | orchestrator | Saturday 17 January 2026 00:51:46 +0000 (0:00:00.366) 0:04:18.785 ****** 2026-01-17 00:59:01.614383 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:59:01.614390 | orchestrator | 2026-01-17 00:59:01.614397 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-01-17 00:59:01.614404 | orchestrator | Saturday 17 January 2026 00:51:46 +0000 (0:00:00.618) 0:04:19.404 ****** 2026-01-17 00:59:01.614410 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.614417 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.614424 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.614431 | orchestrator | 2026-01-17 00:59:01.614438 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-01-17 00:59:01.614445 | orchestrator | Saturday 17 January 2026 00:51:47 +0000 (0:00:00.258) 0:04:19.662 ****** 2026-01-17 00:59:01.614452 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.614459 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.614465 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.614472 | orchestrator | 2026-01-17 00:59:01.614479 | orchestrator | TASK [ceph-mon : 
Include_tasks systemd.yml] ************************************ 2026-01-17 00:59:01.614486 | orchestrator | Saturday 17 January 2026 00:51:47 +0000 (0:00:00.257) 0:04:19.920 ****** 2026-01-17 00:59:01.614492 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:59:01.614499 | orchestrator | 2026-01-17 00:59:01.614506 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-01-17 00:59:01.614512 | orchestrator | Saturday 17 January 2026 00:51:48 +0000 (0:00:01.382) 0:04:21.302 ****** 2026-01-17 00:59:01.614518 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:01.614524 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:59:01.614530 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:59:01.614537 | orchestrator | 2026-01-17 00:59:01.614544 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-01-17 00:59:01.614550 | orchestrator | Saturday 17 January 2026 00:51:50 +0000 (0:00:01.972) 0:04:23.274 ****** 2026-01-17 00:59:01.614556 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:01.614563 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:59:01.614570 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:59:01.614576 | orchestrator | 2026-01-17 00:59:01.614582 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-01-17 00:59:01.614588 | orchestrator | Saturday 17 January 2026 00:51:52 +0000 (0:00:01.788) 0:04:25.063 ****** 2026-01-17 00:59:01.614595 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:01.614601 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:59:01.614612 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:59:01.614619 | orchestrator | 2026-01-17 00:59:01.614625 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-01-17 
00:59:01.614631 | orchestrator | Saturday 17 January 2026 00:51:54 +0000 (0:00:02.006) 0:04:27.070 ****** 2026-01-17 00:59:01.614637 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:01.614644 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:59:01.614650 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:59:01.614657 | orchestrator | 2026-01-17 00:59:01.614666 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-01-17 00:59:01.614674 | orchestrator | Saturday 17 January 2026 00:51:56 +0000 (0:00:02.388) 0:04:29.458 ****** 2026-01-17 00:59:01.614680 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:59:01.614687 | orchestrator | 2026-01-17 00:59:01.614693 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-01-17 00:59:01.614700 | orchestrator | Saturday 17 January 2026 00:51:57 +0000 (0:00:00.571) 0:04:30.029 ****** 2026-01-17 00:59:01.614707 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-01-17 00:59:01.614713 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.614720 | orchestrator | 2026-01-17 00:59:01.614726 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-01-17 00:59:01.614733 | orchestrator | Saturday 17 January 2026 00:52:19 +0000 (0:00:21.807) 0:04:51.836 ****** 2026-01-17 00:59:01.614739 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.614746 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.614752 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.614759 | orchestrator | 2026-01-17 00:59:01.614765 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-01-17 00:59:01.614772 | orchestrator | Saturday 17 January 2026 00:52:28 +0000 (0:00:09.134) 0:05:00.970 ****** 2026-01-17 00:59:01.614778 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.614785 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.614791 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.614798 | orchestrator | 2026-01-17 00:59:01.614804 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-01-17 00:59:01.614833 | orchestrator | Saturday 17 January 2026 00:52:28 +0000 (0:00:00.539) 0:05:01.510 ****** 2026-01-17 00:59:01.614842 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b7da9dc892467d914678a0e472b33c92c215a055'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-01-17 00:59:01.614850 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b7da9dc892467d914678a0e472b33c92c215a055'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-01-17 00:59:01.614856 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b7da9dc892467d914678a0e472b33c92c215a055'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-01-17 00:59:01.614864 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b7da9dc892467d914678a0e472b33c92c215a055'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-01-17 00:59:01.614875 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b7da9dc892467d914678a0e472b33c92c215a055'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-01-17 00:59:01.614882 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b7da9dc892467d914678a0e472b33c92c215a055'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__b7da9dc892467d914678a0e472b33c92c215a055'}])  2026-01-17 00:59:01.614890 | orchestrator | 2026-01-17 00:59:01.614896 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-01-17 00:59:01.614903 | orchestrator | Saturday 17 January 2026 00:52:43 +0000 (0:00:14.696) 0:05:16.206 ****** 2026-01-17 00:59:01.614937 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.614944 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.614950 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.614956 | orchestrator | 2026-01-17 00:59:01.614962 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-01-17 00:59:01.614968 | orchestrator | Saturday 17 January 2026 00:52:44 +0000 (0:00:00.337) 0:05:16.544 ****** 2026-01-17 00:59:01.614980 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:59:01.614986 | orchestrator | 2026-01-17 00:59:01.614993 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-01-17 00:59:01.614999 | orchestrator | Saturday 17 January 2026 00:52:44 +0000 (0:00:00.794) 0:05:17.338 ****** 2026-01-17 00:59:01.615006 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.615013 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.615019 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.615026 | orchestrator | 2026-01-17 00:59:01.615032 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-01-17 00:59:01.615038 | orchestrator | Saturday 17 January 2026 00:52:45 +0000 (0:00:00.351) 0:05:17.689 ****** 2026-01-17 00:59:01.615044 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.615051 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.615058 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.615064 | orchestrator | 2026-01-17 00:59:01.615071 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-01-17 
00:59:01.615077 | orchestrator | Saturday 17 January 2026 00:52:45 +0000 (0:00:00.355) 0:05:18.045 ****** 2026-01-17 00:59:01.615084 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-17 00:59:01.615090 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-17 00:59:01.615097 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-17 00:59:01.615103 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.615110 | orchestrator | 2026-01-17 00:59:01.615117 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-01-17 00:59:01.615124 | orchestrator | Saturday 17 January 2026 00:52:46 +0000 (0:00:01.212) 0:05:19.258 ****** 2026-01-17 00:59:01.615130 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.615157 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.615164 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.615170 | orchestrator | 2026-01-17 00:59:01.615177 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-01-17 00:59:01.615184 | orchestrator | 2026-01-17 00:59:01.615190 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-17 00:59:01.615197 | orchestrator | Saturday 17 January 2026 00:52:47 +0000 (0:00:00.570) 0:05:19.828 ****** 2026-01-17 00:59:01.615209 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:59:01.615216 | orchestrator | 2026-01-17 00:59:01.615222 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-17 00:59:01.615229 | orchestrator | Saturday 17 January 2026 00:52:47 +0000 (0:00:00.499) 0:05:20.328 ****** 2026-01-17 00:59:01.615235 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-01-17 00:59:01.615242 | orchestrator | 2026-01-17 00:59:01.615248 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-17 00:59:01.615255 | orchestrator | Saturday 17 January 2026 00:52:48 +0000 (0:00:00.764) 0:05:21.092 ****** 2026-01-17 00:59:01.615262 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.615269 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.615275 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.615282 | orchestrator | 2026-01-17 00:59:01.615288 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-17 00:59:01.615295 | orchestrator | Saturday 17 January 2026 00:52:49 +0000 (0:00:00.679) 0:05:21.771 ****** 2026-01-17 00:59:01.615301 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.615308 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.615314 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.615321 | orchestrator | 2026-01-17 00:59:01.615327 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-17 00:59:01.615334 | orchestrator | Saturday 17 January 2026 00:52:49 +0000 (0:00:00.322) 0:05:22.094 ****** 2026-01-17 00:59:01.615340 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.615347 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.615354 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.615360 | orchestrator | 2026-01-17 00:59:01.615367 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-17 00:59:01.615373 | orchestrator | Saturday 17 January 2026 00:52:50 +0000 (0:00:00.554) 0:05:22.649 ****** 2026-01-17 00:59:01.615380 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.615386 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.615393 | orchestrator | skipping: 
[testbed-node-2] 2026-01-17 00:59:01.615399 | orchestrator | 2026-01-17 00:59:01.615406 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-17 00:59:01.615413 | orchestrator | Saturday 17 January 2026 00:52:50 +0000 (0:00:00.404) 0:05:23.054 ****** 2026-01-17 00:59:01.615419 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.615426 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.615432 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.615439 | orchestrator | 2026-01-17 00:59:01.615445 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-17 00:59:01.615452 | orchestrator | Saturday 17 January 2026 00:52:51 +0000 (0:00:00.779) 0:05:23.833 ****** 2026-01-17 00:59:01.615457 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.615463 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.615469 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.615475 | orchestrator | 2026-01-17 00:59:01.615481 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-17 00:59:01.615487 | orchestrator | Saturday 17 January 2026 00:52:51 +0000 (0:00:00.374) 0:05:24.207 ****** 2026-01-17 00:59:01.615492 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.615499 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.615505 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.615511 | orchestrator | 2026-01-17 00:59:01.615517 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-17 00:59:01.615523 | orchestrator | Saturday 17 January 2026 00:52:52 +0000 (0:00:00.651) 0:05:24.858 ****** 2026-01-17 00:59:01.615529 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.615535 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.615545 | orchestrator | ok: [testbed-node-1] 2026-01-17 
00:59:01.615556 | orchestrator | 2026-01-17 00:59:01.615563 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-17 00:59:01.615569 | orchestrator | Saturday 17 January 2026 00:52:53 +0000 (0:00:00.876) 0:05:25.734 ****** 2026-01-17 00:59:01.615576 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.615583 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.615589 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.615595 | orchestrator | 2026-01-17 00:59:01.615602 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-17 00:59:01.615607 | orchestrator | Saturday 17 January 2026 00:52:53 +0000 (0:00:00.784) 0:05:26.518 ****** 2026-01-17 00:59:01.615614 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.615620 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.615626 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.615633 | orchestrator | 2026-01-17 00:59:01.615639 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-17 00:59:01.615646 | orchestrator | Saturday 17 January 2026 00:52:54 +0000 (0:00:00.301) 0:05:26.820 ****** 2026-01-17 00:59:01.615652 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.615659 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.615665 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.615671 | orchestrator | 2026-01-17 00:59:01.615677 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-17 00:59:01.615683 | orchestrator | Saturday 17 January 2026 00:52:54 +0000 (0:00:00.637) 0:05:27.458 ****** 2026-01-17 00:59:01.615690 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.615696 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.615703 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.615709 | orchestrator | 
2026-01-17 00:59:01.615715 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-17 00:59:01.615748 | orchestrator | Saturday 17 January 2026 00:52:55 +0000 (0:00:00.369) 0:05:27.827 ****** 2026-01-17 00:59:01.615755 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.615761 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.615767 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.615773 | orchestrator | 2026-01-17 00:59:01.615779 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-17 00:59:01.615786 | orchestrator | Saturday 17 January 2026 00:52:55 +0000 (0:00:00.346) 0:05:28.173 ****** 2026-01-17 00:59:01.615792 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.615798 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.615805 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.615812 | orchestrator | 2026-01-17 00:59:01.615819 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-17 00:59:01.615825 | orchestrator | Saturday 17 January 2026 00:52:55 +0000 (0:00:00.312) 0:05:28.486 ****** 2026-01-17 00:59:01.615832 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.615838 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.615844 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.615851 | orchestrator | 2026-01-17 00:59:01.615857 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-17 00:59:01.615863 | orchestrator | Saturday 17 January 2026 00:52:56 +0000 (0:00:00.384) 0:05:28.871 ****** 2026-01-17 00:59:01.615869 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.615875 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.615881 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.615887 | orchestrator | 
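The paired "Check for a ... container" and "Set_fact handler_..._status" tasks above all follow one pattern: probe for the daemon's container, register the result, and derive a boolean fact that later handlers consult. A hedged sketch of that pattern (the `container_binary` variable and exact filter string are assumptions):

```yaml
# Sketch of the check/set_fact pattern seen in the log; details are assumed.
- name: Check for a mon container
  command: "{{ container_binary }} ps -q --filter name=ceph-mon-{{ ansible_facts['hostname'] }}"
  register: ceph_mon_container_stat
  changed_when: false
  failed_when: false

- name: Set_fact handler_mon_status
  set_fact:
    handler_mon_status: "{{ ceph_mon_container_stat.stdout_lines | default([]) | length > 0 }}"
```

Hosts without the daemon simply skip the corresponding `set_fact`, which is why most of these tasks report `skipping` while the mon/mgr/crash/exporter checks report `ok`.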
2026-01-17 00:59:01.615893 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-17 00:59:01.615899 | orchestrator | Saturday 17 January 2026 00:52:56 +0000 (0:00:00.583) 0:05:29.455 ****** 2026-01-17 00:59:01.615905 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.615924 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.615931 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.615937 | orchestrator | 2026-01-17 00:59:01.615942 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-17 00:59:01.615955 | orchestrator | Saturday 17 January 2026 00:52:57 +0000 (0:00:00.360) 0:05:29.815 ****** 2026-01-17 00:59:01.615961 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.615968 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.615974 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.615980 | orchestrator | 2026-01-17 00:59:01.615986 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-17 00:59:01.615993 | orchestrator | Saturday 17 January 2026 00:52:57 +0000 (0:00:00.408) 0:05:30.224 ****** 2026-01-17 00:59:01.615999 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.616005 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.616011 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.616018 | orchestrator | 2026-01-17 00:59:01.616024 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-01-17 00:59:01.616030 | orchestrator | Saturday 17 January 2026 00:52:58 +0000 (0:00:00.767) 0:05:30.991 ****** 2026-01-17 00:59:01.616037 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-17 00:59:01.616043 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-17 00:59:01.616049 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-01-17 00:59:01.616056 | orchestrator | 2026-01-17 00:59:01.616062 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-01-17 00:59:01.616068 | orchestrator | Saturday 17 January 2026 00:52:59 +0000 (0:00:00.669) 0:05:31.661 ****** 2026-01-17 00:59:01.616075 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:59:01.616082 | orchestrator | 2026-01-17 00:59:01.616088 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-01-17 00:59:01.616095 | orchestrator | Saturday 17 January 2026 00:52:59 +0000 (0:00:00.546) 0:05:32.207 ****** 2026-01-17 00:59:01.616101 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:01.616108 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:59:01.616114 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:59:01.616121 | orchestrator | 2026-01-17 00:59:01.616127 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-01-17 00:59:01.616134 | orchestrator | Saturday 17 January 2026 00:53:00 +0000 (0:00:00.738) 0:05:32.945 ****** 2026-01-17 00:59:01.616141 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.616147 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.616153 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.616159 | orchestrator | 2026-01-17 00:59:01.616165 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-01-17 00:59:01.616171 | orchestrator | Saturday 17 January 2026 00:53:00 +0000 (0:00:00.558) 0:05:33.503 ****** 2026-01-17 00:59:01.616177 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-17 00:59:01.616183 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-17 00:59:01.616189 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-01-17 00:59:01.616194 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-01-17 00:59:01.616201 | orchestrator | 2026-01-17 00:59:01.616208 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-01-17 00:59:01.616215 | orchestrator | Saturday 17 January 2026 00:53:11 +0000 (0:00:10.207) 0:05:43.711 ****** 2026-01-17 00:59:01.616222 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.616228 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.616235 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.616241 | orchestrator | 2026-01-17 00:59:01.616248 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-01-17 00:59:01.616263 | orchestrator | Saturday 17 January 2026 00:53:11 +0000 (0:00:00.387) 0:05:44.098 ****** 2026-01-17 00:59:01.616269 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-17 00:59:01.616276 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-17 00:59:01.616288 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-17 00:59:01.616296 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-01-17 00:59:01.616303 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-17 00:59:01.616342 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-17 00:59:01.616351 | orchestrator | 2026-01-17 00:59:01.616358 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-01-17 00:59:01.616364 | orchestrator | Saturday 17 January 2026 00:53:13 +0000 (0:00:02.242) 0:05:46.340 ****** 2026-01-17 00:59:01.616370 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-17 00:59:01.616376 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-17 00:59:01.616383 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-17 
00:59:01.616388 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-01-17 00:59:01.616394 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-17 00:59:01.616400 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-01-17 00:59:01.616406 | orchestrator | 2026-01-17 00:59:01.616412 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-01-17 00:59:01.616418 | orchestrator | Saturday 17 January 2026 00:53:15 +0000 (0:00:01.375) 0:05:47.716 ****** 2026-01-17 00:59:01.616424 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:01.616431 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:01.616437 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:01.616443 | orchestrator | 2026-01-17 00:59:01.616450 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-01-17 00:59:01.616456 | orchestrator | Saturday 17 January 2026 00:53:16 +0000 (0:00:01.070) 0:05:48.787 ****** 2026-01-17 00:59:01.616462 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.616468 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.616474 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.616480 | orchestrator | 2026-01-17 00:59:01.616486 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-01-17 00:59:01.616492 | orchestrator | Saturday 17 January 2026 00:53:16 +0000 (0:00:00.324) 0:05:49.111 ****** 2026-01-17 00:59:01.616498 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:01.616504 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:01.616510 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:01.616516 | orchestrator | 2026-01-17 00:59:01.616523 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-01-17 00:59:01.616530 | orchestrator | Saturday 17 January 2026 00:53:16 +0000 (0:00:00.326) 
0:05:49.437 ******
2026-01-17 00:59:01.616536 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:59:01.616543 | orchestrator |
2026-01-17 00:59:01.616549 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-01-17 00:59:01.616613 | orchestrator | Saturday 17 January 2026 00:53:17 +0000 (0:00:00.733) 0:05:50.171 ******
2026-01-17 00:59:01.616631 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.616636 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.616640 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.616643 | orchestrator |
2026-01-17 00:59:01.616647 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-01-17 00:59:01.616651 | orchestrator | Saturday 17 January 2026 00:53:18 +0000 (0:00:00.348) 0:05:50.520 ******
2026-01-17 00:59:01.616655 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.616659 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.616663 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.616666 | orchestrator |
2026-01-17 00:59:01.616670 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-01-17 00:59:01.616674 | orchestrator | Saturday 17 January 2026 00:53:18 +0000 (0:00:00.322) 0:05:50.842 ******
2026-01-17 00:59:01.616678 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:59:01.616686 | orchestrator |
2026-01-17 00:59:01.616690 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-01-17 00:59:01.616694 | orchestrator | Saturday 17 January 2026 00:53:19 +0000 (0:00:00.769) 0:05:51.612 ******
2026-01-17 00:59:01.616698 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:59:01.616701 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:59:01.616705 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:59:01.616709 | orchestrator |
2026-01-17 00:59:01.616713 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-01-17 00:59:01.616719 | orchestrator | Saturday 17 January 2026 00:53:20 +0000 (0:00:01.462) 0:05:53.074 ******
2026-01-17 00:59:01.616722 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:59:01.616726 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:59:01.616730 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:59:01.616734 | orchestrator |
2026-01-17 00:59:01.616737 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-01-17 00:59:01.616741 | orchestrator | Saturday 17 January 2026 00:53:21 +0000 (0:00:01.301) 0:05:54.376 ******
2026-01-17 00:59:01.616745 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:59:01.616748 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:59:01.616752 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:59:01.616756 | orchestrator |
2026-01-17 00:59:01.616760 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-01-17 00:59:01.616763 | orchestrator | Saturday 17 January 2026 00:53:23 +0000 (0:00:01.961) 0:05:56.337 ******
2026-01-17 00:59:01.616767 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:59:01.616771 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:59:01.616774 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:59:01.616778 | orchestrator |
2026-01-17 00:59:01.616782 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-01-17 00:59:01.616786 | orchestrator | Saturday 17 January 2026 00:53:26 +0000 (0:00:02.480) 0:05:58.818 ******
2026-01-17 00:59:01.616789 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.616793 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.616797 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-01-17 00:59:01.616800 | orchestrator |
2026-01-17 00:59:01.616804 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-01-17 00:59:01.616808 | orchestrator | Saturday 17 January 2026 00:53:26 +0000 (0:00:00.423) 0:05:59.241 ******
2026-01-17 00:59:01.616835 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-01-17 00:59:01.616840 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-01-17 00:59:01.616844 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-01-17 00:59:01.616848 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-01-17 00:59:01.616851 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-01-17 00:59:01.616855 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left).
2026-01-17 00:59:01.616859 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-01-17 00:59:01.616863 | orchestrator |
2026-01-17 00:59:01.616866 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-01-17 00:59:01.616870 | orchestrator | Saturday 17 January 2026 00:54:03 +0000 (0:00:36.422) 0:06:35.664 ******
2026-01-17 00:59:01.616874 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-01-17 00:59:01.616878 | orchestrator |
2026-01-17 00:59:01.616882 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-01-17 00:59:01.616885 | orchestrator | Saturday 17 January 2026 00:54:04 +0000 (0:00:01.310) 0:06:36.974 ******
2026-01-17 00:59:01.616892 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:01.616896 | orchestrator |
2026-01-17 00:59:01.616900 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-01-17 00:59:01.616904 | orchestrator | Saturday 17 January 2026 00:54:04 +0000 (0:00:00.327) 0:06:37.302 ******
2026-01-17 00:59:01.616943 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:01.616947 | orchestrator |
2026-01-17 00:59:01.616951 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-01-17 00:59:01.616955 | orchestrator | Saturday 17 January 2026 00:54:04 +0000 (0:00:00.136) 0:06:37.438 ******
2026-01-17 00:59:01.616959 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-01-17 00:59:01.616962 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-01-17 00:59:01.616966 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-01-17 00:59:01.616970 | orchestrator |
2026-01-17 00:59:01.616974 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-01-17 00:59:01.616977 | orchestrator | Saturday 17 January 2026 00:54:11 +0000 (0:00:06.751) 0:06:44.190 ******
2026-01-17 00:59:01.616981 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-01-17 00:59:01.616985 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-01-17 00:59:01.616988 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-01-17 00:59:01.616992 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-01-17 00:59:01.616996 | orchestrator |
2026-01-17 00:59:01.617000 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-17 00:59:01.617003 | orchestrator | Saturday 17 January 2026 00:54:17 +0000 (0:00:05.347) 0:06:49.538 ******
2026-01-17 00:59:01.617007 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:59:01.617011 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:59:01.617014 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:59:01.617018 | orchestrator |
2026-01-17 00:59:01.617022 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-01-17 00:59:01.617026 | orchestrator | Saturday 17 January 2026 00:54:17 +0000 (0:00:00.742) 0:06:50.280 ******
2026-01-17 00:59:01.617029 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:59:01.617033 | orchestrator |
2026-01-17 00:59:01.617037 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-01-17 00:59:01.617043 | orchestrator | Saturday 17 January 2026 00:54:18 +0000 (0:00:00.794) 0:06:51.074 ******
2026-01-17 00:59:01.617047 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:01.617050 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:01.617054 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:01.617058 | orchestrator |
2026-01-17 00:59:01.617062 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-01-17 00:59:01.617065 | orchestrator | Saturday 17 January 2026 00:54:18 +0000 (0:00:00.350) 0:06:51.425 ******
2026-01-17 00:59:01.617069 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:59:01.617073 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:59:01.617077 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:59:01.617081 | orchestrator |
2026-01-17 00:59:01.617084 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-01-17 00:59:01.617088 | orchestrator | Saturday 17 January 2026 00:54:20 +0000 (0:00:01.270) 0:06:52.696 ******
2026-01-17 00:59:01.617092 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-17 00:59:01.617096 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-17 00:59:01.617099 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-17 00:59:01.617103 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.617107 | orchestrator |
2026-01-17 00:59:01.617110 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-01-17 00:59:01.617117 | orchestrator | Saturday 17 January 2026 00:54:21 +0000 (0:00:00.860) 0:06:53.556 ******
2026-01-17 00:59:01.617121 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:01.617125 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:01.617128 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:01.617132 | orchestrator |
2026-01-17 00:59:01.617136 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-01-17 00:59:01.617139 | orchestrator |
2026-01-17 00:59:01.617143 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-17 00:59:01.617162 | orchestrator | Saturday 17 January 2026 00:54:21 +0000 (0:00:00.845) 0:06:54.402 ******
2026-01-17 00:59:01.617166 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-17 00:59:01.617172 | orchestrator |
2026-01-17 00:59:01.617179 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-17 00:59:01.617185 | orchestrator | Saturday 17 January 2026 00:54:22 +0000 (0:00:00.542) 0:06:54.944 ******
2026-01-17 00:59:01.617192 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-17 00:59:01.617198 | orchestrator |
2026-01-17 00:59:01.617202 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-17 00:59:01.617206 | orchestrator | Saturday 17 January 2026 00:54:23 +0000 (0:00:00.771) 0:06:55.716 ******
2026-01-17 00:59:01.617210 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.617214 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.617217 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.617221 | orchestrator |
2026-01-17 00:59:01.617225 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-17 00:59:01.617229 | orchestrator | Saturday 17 January 2026 00:54:23 +0000 (0:00:00.309) 0:06:56.026 ******
2026-01-17 00:59:01.617233 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.617236 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.617240 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.617244 | orchestrator |
2026-01-17 00:59:01.617247 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-17 00:59:01.617251 | orchestrator | Saturday 17 January 2026 00:54:24 +0000 (0:00:00.793) 0:06:56.820 ******
2026-01-17 00:59:01.617255 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.617259 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.617263 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.617266 | orchestrator |
2026-01-17 00:59:01.617270 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-17 00:59:01.617274 | orchestrator | Saturday 17 January 2026 00:54:25 +0000 (0:00:00.793) 0:06:57.614 ******
2026-01-17 00:59:01.617278 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.617281 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.617285 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.617289 | orchestrator |
2026-01-17 00:59:01.617293 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-17 00:59:01.617296 | orchestrator | Saturday 17 January 2026 00:54:26 +0000 (0:00:01.245) 0:06:58.859 ******
2026-01-17 00:59:01.617300 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.617304 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.617308 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.617311 | orchestrator |
2026-01-17 00:59:01.617315 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-17 00:59:01.617319 | orchestrator | Saturday 17 January 2026 00:54:26 +0000 (0:00:00.338) 0:06:59.198 ******
2026-01-17 00:59:01.617323 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.617327 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.617330 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.617334 | orchestrator |
2026-01-17 00:59:01.617338 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-17 00:59:01.617345 | orchestrator | Saturday 17 January 2026 00:54:27 +0000 (0:00:00.332) 0:06:59.531 ******
2026-01-17 00:59:01.617349 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.617353 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.617357 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.617360 | orchestrator |
2026-01-17 00:59:01.617364 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-17 00:59:01.617368 | orchestrator | Saturday 17 January 2026 00:54:27 +0000 (0:00:00.307) 0:06:59.839 ******
2026-01-17 00:59:01.617372 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.617375 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.617379 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.617383 | orchestrator |
2026-01-17 00:59:01.617387 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-17 00:59:01.617390 | orchestrator | Saturday 17 January 2026 00:54:28 +0000 (0:00:01.108) 0:07:00.947 ******
2026-01-17 00:59:01.617394 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.617400 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.617404 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.617415 | orchestrator |
2026-01-17 00:59:01.617423 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-17 00:59:01.617426 | orchestrator | Saturday 17 January 2026 00:54:29 +0000 (0:00:00.880) 0:07:01.827 ******
2026-01-17 00:59:01.617430 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.617434 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.617438 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.617441 | orchestrator |
2026-01-17 00:59:01.617445 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-17 00:59:01.617449 | orchestrator | Saturday 17 January 2026 00:54:29 +0000 (0:00:00.301) 0:07:02.128 ******
2026-01-17 00:59:01.617453 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.617456 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.617460 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.617464 | orchestrator |
2026-01-17 00:59:01.617468 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-17 00:59:01.617471 | orchestrator | Saturday 17 January 2026 00:54:29 +0000 (0:00:00.332) 0:07:02.461 ******
2026-01-17 00:59:01.617475 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.617479 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.617482 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.617486 | orchestrator |
2026-01-17 00:59:01.617492 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-17 00:59:01.617499 | orchestrator | Saturday 17 January 2026 00:54:30 +0000 (0:00:00.656) 0:07:03.117 ******
2026-01-17 00:59:01.617506 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.617512 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.617518 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.617524 | orchestrator |
2026-01-17 00:59:01.617530 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-17 00:59:01.617552 | orchestrator | Saturday 17 January 2026 00:54:30 +0000 (0:00:00.361) 0:07:03.479 ******
2026-01-17 00:59:01.617558 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.617561 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.617565 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.617569 | orchestrator |
2026-01-17 00:59:01.617572 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-17 00:59:01.617576 | orchestrator | Saturday 17 January 2026 00:54:31 +0000 (0:00:00.363) 0:07:03.843 ******
2026-01-17 00:59:01.617582 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.617589 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.617595 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.617601 | orchestrator |
2026-01-17 00:59:01.617608 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-17 00:59:01.617614 | orchestrator | Saturday 17 January 2026 00:54:31 +0000 (0:00:00.355) 0:07:04.198 ******
2026-01-17 00:59:01.617625 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.617630 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.617636 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.617641 | orchestrator |
2026-01-17 00:59:01.617647 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-17 00:59:01.617653 | orchestrator | Saturday 17 January 2026 00:54:32 +0000 (0:00:00.577) 0:07:04.775 ******
2026-01-17 00:59:01.617659 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.617665 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.617672 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.617677 | orchestrator |
2026-01-17 00:59:01.617684 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-17 00:59:01.617690 | orchestrator | Saturday 17 January 2026 00:54:32 +0000 (0:00:00.316) 0:07:05.092 ******
2026-01-17 00:59:01.617696 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.617703 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.617709 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.617716 | orchestrator |
2026-01-17 00:59:01.617723 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-17 00:59:01.617730 | orchestrator | Saturday 17 January 2026 00:54:32 +0000 (0:00:00.355) 0:07:05.447 ******
2026-01-17 00:59:01.617737 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.617743 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.617750 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.617757 | orchestrator |
2026-01-17 00:59:01.617764 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-01-17 00:59:01.617771 | orchestrator | Saturday 17 January 2026 00:54:33 +0000 (0:00:00.800) 0:07:06.247 ******
2026-01-17 00:59:01.617778 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.617784 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.617790 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.617796 | orchestrator |
2026-01-17 00:59:01.617801 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-01-17 00:59:01.617807 | orchestrator | Saturday 17 January 2026 00:54:34 +0000 (0:00:00.328) 0:07:06.575 ******
2026-01-17 00:59:01.617813 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-17 00:59:01.617820 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-17 00:59:01.617827 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-17 00:59:01.617833 | orchestrator |
2026-01-17 00:59:01.617840 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-01-17 00:59:01.617846 | orchestrator | Saturday 17 January 2026 00:54:34 +0000 (0:00:00.599) 0:07:07.175 ******
2026-01-17 00:59:01.617853 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-17 00:59:01.617860 | orchestrator |
2026-01-17 00:59:01.617868 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-01-17 00:59:01.617875 | orchestrator | Saturday 17 January 2026 00:54:35 +0000 (0:00:00.539) 0:07:07.715 ******
2026-01-17 00:59:01.617882 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.617889 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.617896 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.617903 | orchestrator |
2026-01-17 00:59:01.617923 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-01-17 00:59:01.617934 | orchestrator | Saturday 17 January 2026 00:54:35 +0000 (0:00:00.584) 0:07:08.300 ******
2026-01-17 00:59:01.617941 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.617948 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.617954 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.617960 | orchestrator |
2026-01-17 00:59:01.617967 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-01-17 00:59:01.617973 | orchestrator | Saturday 17 January 2026 00:54:36 +0000 (0:00:00.333) 0:07:08.633 ******
2026-01-17 00:59:01.617984 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.617991 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.617997 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.618004 | orchestrator |
2026-01-17 00:59:01.618010 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-01-17 00:59:01.618038 | orchestrator | Saturday 17 January 2026 00:54:36 +0000 (0:00:00.638) 0:07:09.271 ******
2026-01-17 00:59:01.618042 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.618046 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.618050 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.618054 | orchestrator |
2026-01-17 00:59:01.618058 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-01-17 00:59:01.618061 | orchestrator | Saturday 17 January 2026 00:54:37 +0000 (0:00:00.323) 0:07:09.595 ******
2026-01-17 00:59:01.618065 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-17 00:59:01.618069 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-17 00:59:01.618073 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-17 00:59:01.618082 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-17 00:59:01.618086 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-17 00:59:01.618090 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-17 00:59:01.618093 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-17 00:59:01.618097 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-17 00:59:01.618101 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-17 00:59:01.618105 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-17 00:59:01.618109 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-17 00:59:01.618112 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-17 00:59:01.618116 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-17 00:59:01.618120 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-17 00:59:01.618126 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-17 00:59:01.618134 | orchestrator |
2026-01-17 00:59:01.618145 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-01-17 00:59:01.618151 | orchestrator | Saturday 17 January 2026 00:54:41 +0000 (0:00:04.208) 0:07:13.804 ******
2026-01-17 00:59:01.618157 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.618163 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.618170 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.618176 | orchestrator |
2026-01-17 00:59:01.618182 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-01-17 00:59:01.618187 | orchestrator | Saturday 17 January 2026 00:54:41 +0000 (0:00:00.341) 0:07:14.145 ******
2026-01-17 00:59:01.618193 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-17 00:59:01.618199 | orchestrator |
2026-01-17 00:59:01.618205 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-01-17 00:59:01.618211 | orchestrator | Saturday 17 January 2026 00:54:42 +0000 (0:00:00.628) 0:07:14.773 ******
2026-01-17 00:59:01.618218 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-17 00:59:01.618224 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-17 00:59:01.618230 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-01-17 00:59:01.618241 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-01-17 00:59:01.618248 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-17 00:59:01.618255 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-01-17 00:59:01.618261 | orchestrator |
2026-01-17 00:59:01.618267 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-01-17 00:59:01.618273 | orchestrator | Saturday 17 January 2026 00:54:43 +0000 (0:00:01.385) 0:07:16.159 ******
2026-01-17 00:59:01.618280 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-17 00:59:01.618284 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-17 00:59:01.618288 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-17 00:59:01.618292 | orchestrator |
2026-01-17 00:59:01.618295 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-01-17 00:59:01.618299 | orchestrator | Saturday 17 January 2026 00:54:45 +0000 (0:00:02.160) 0:07:18.320 ******
2026-01-17 00:59:01.618303 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-17 00:59:01.618307 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-17 00:59:01.618311 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:59:01.618315 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-17 00:59:01.618322 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-01-17 00:59:01.618326 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:59:01.618330 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-17 00:59:01.618333 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-01-17 00:59:01.618337 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:59:01.618341 | orchestrator |
2026-01-17 00:59:01.618347 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-01-17 00:59:01.618352 | orchestrator | Saturday 17 January 2026 00:54:47 +0000 (0:00:01.288) 0:07:19.608 ******
2026-01-17 00:59:01.618356 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-17 00:59:01.618359 | orchestrator |
2026-01-17 00:59:01.618363 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-01-17 00:59:01.618367 | orchestrator | Saturday 17 January 2026 00:54:49 +0000 (0:00:02.154) 0:07:21.763 ******
2026-01-17 00:59:01.618371 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-17 00:59:01.618374 | orchestrator |
2026-01-17 00:59:01.618378 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-01-17 00:59:01.618382 | orchestrator | Saturday 17 January 2026 00:54:50 +0000 (0:00:00.830) 0:07:22.593 ******
2026-01-17 00:59:01.618386 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165', 'data_vg': 'ceph-6f2a493f-ee42-5e89-bc68-fb4f7dc1b165'})
2026-01-17 00:59:01.618396 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c5f49b22-d40f-5ab7-98f7-9762e23da2c0', 'data_vg': 'ceph-c5f49b22-d40f-5ab7-98f7-9762e23da2c0'})
2026-01-17 00:59:01.618400 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a3dfbdd8-de3c-56f7-9997-9a9b5f483001', 'data_vg': 'ceph-a3dfbdd8-de3c-56f7-9997-9a9b5f483001'})
2026-01-17 00:59:01.618404 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2051e43b-6678-567a-85ad-b7e1187d21ae', 'data_vg': 'ceph-2051e43b-6678-567a-85ad-b7e1187d21ae'})
2026-01-17 00:59:01.618408 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-fbc9b557-fafa-5136-b4c6-7d286dd557bb', 'data_vg': 'ceph-fbc9b557-fafa-5136-b4c6-7d286dd557bb'})
2026-01-17 00:59:01.618411 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-68934a0c-2b18-58d2-8851-459d4d664360', 'data_vg': 'ceph-68934a0c-2b18-58d2-8851-459d4d664360'})
2026-01-17 00:59:01.618415 | orchestrator |
2026-01-17 00:59:01.618419 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-01-17 00:59:01.618423 | orchestrator | Saturday 17 January 2026 00:55:34 +0000 (0:00:43.981) 0:08:06.575 ******
2026-01-17 00:59:01.618430 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.618434 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.618437 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.618441 | orchestrator |
2026-01-17 00:59:01.618445 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-01-17 00:59:01.618448 | orchestrator | Saturday 17 January 2026 00:55:34 +0000 (0:00:00.314) 0:08:06.889 ******
2026-01-17 00:59:01.618452 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-17 00:59:01.618456 | orchestrator |
2026-01-17 00:59:01.618460 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-01-17 00:59:01.618463 | orchestrator | Saturday 17 January 2026 00:55:35 +0000 (0:00:00.816) 0:08:07.706 ******
2026-01-17 00:59:01.618467 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.618471 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.618474 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.618478 | orchestrator |
2026-01-17 00:59:01.618482 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-01-17 00:59:01.618485 | orchestrator | Saturday 17 January 2026 00:55:35 +0000 (0:00:00.763) 0:08:08.469 ******
2026-01-17 00:59:01.618489 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.618493 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.618497 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.618500 | orchestrator |
2026-01-17 00:59:01.618504 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-01-17 00:59:01.618508 | orchestrator | Saturday 17 January 2026 00:55:38 +0000 (0:00:03.009) 0:08:11.479 ******
2026-01-17 00:59:01.618511 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-17 00:59:01.618515 | orchestrator |
2026-01-17 00:59:01.618519 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-01-17 00:59:01.618523 | orchestrator | Saturday 17 January 2026 00:55:39 +0000 (0:00:00.781) 0:08:12.260 ******
2026-01-17 00:59:01.618526 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:59:01.618530 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:59:01.618534 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:59:01.618538 | orchestrator |
2026-01-17 00:59:01.618541 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-01-17 00:59:01.618545 | orchestrator | Saturday 17 January 2026 00:55:41 +0000 (0:00:01.389) 0:08:13.649 ******
2026-01-17 00:59:01.618549 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:59:01.618553 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:59:01.618556 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:59:01.618560 | orchestrator |
2026-01-17 00:59:01.618564 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-01-17 00:59:01.618567 | orchestrator | Saturday 17 January 2026 00:55:42 +0000 (0:00:01.267) 0:08:14.916 ******
2026-01-17 00:59:01.618571 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:59:01.618575 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:59:01.618578 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:59:01.618582 | orchestrator |
2026-01-17 00:59:01.618590 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-01-17 00:59:01.618594 | orchestrator | Saturday 17 January 2026 00:55:44 +0000 (0:00:02.087) 0:08:17.004 ******
2026-01-17 00:59:01.618597 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.618601 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.618605 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.618609 | orchestrator |
2026-01-17 00:59:01.618612 | orchestrator | TASK [ceph-osd : Add ceph-osd
systemd service overrides] *********************** 2026-01-17 00:59:01.618616 | orchestrator | Saturday 17 January 2026 00:55:45 +0000 (0:00:00.619) 0:08:17.624 ****** 2026-01-17 00:59:01.618620 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.618624 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.618632 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.618635 | orchestrator | 2026-01-17 00:59:01.618639 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-01-17 00:59:01.618643 | orchestrator | Saturday 17 January 2026 00:55:45 +0000 (0:00:00.344) 0:08:17.968 ****** 2026-01-17 00:59:01.618646 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-01-17 00:59:01.618650 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-17 00:59:01.618654 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-01-17 00:59:01.618659 | orchestrator | ok: [testbed-node-3] => (item=1) 2026-01-17 00:59:01.618666 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-01-17 00:59:01.618673 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-01-17 00:59:01.618679 | orchestrator | 2026-01-17 00:59:01.618686 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-01-17 00:59:01.618692 | orchestrator | Saturday 17 January 2026 00:55:46 +0000 (0:00:01.167) 0:08:19.135 ****** 2026-01-17 00:59:01.618698 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-01-17 00:59:01.618702 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-01-17 00:59:01.618708 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-01-17 00:59:01.618712 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-01-17 00:59:01.618716 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-01-17 00:59:01.618719 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-01-17 00:59:01.618723 | orchestrator | 2026-01-17 00:59:01.618727 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-01-17 00:59:01.618731 | orchestrator | Saturday 17 January 2026 00:55:49 +0000 (0:00:02.463) 0:08:21.598 ****** 2026-01-17 00:59:01.618734 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-01-17 00:59:01.618738 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-01-17 00:59:01.618742 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-01-17 00:59:01.618746 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-01-17 00:59:01.618749 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-01-17 00:59:01.618753 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-01-17 00:59:01.618757 | orchestrator | 2026-01-17 00:59:01.618761 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-01-17 00:59:01.618764 | orchestrator | Saturday 17 January 2026 00:55:53 +0000 (0:00:04.475) 0:08:26.074 ****** 2026-01-17 00:59:01.618768 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.618772 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.618775 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-17 00:59:01.618779 | orchestrator | 2026-01-17 00:59:01.618783 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-01-17 00:59:01.618786 | orchestrator | Saturday 17 January 2026 00:55:55 +0000 (0:00:02.232) 0:08:28.307 ****** 2026-01-17 00:59:01.618790 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.618794 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.618798 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
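The FAILED - RETRYING record above comes from ceph-ansible's "Wait for all osd to be up" check, which polls a monitor until every provisioned OSD reports up before the play continues. A hedged sketch of that retry pattern (the command invocation and delay value are assumptions, not ceph-ansible's literal start_osds.yml task):

```yaml
# Hypothetical approximation of the "Wait for all osd to be up" polling loop.
# Delegated to the first monitor, matching the log (testbed-node-5 -> testbed-node-0).
- name: Wait for all osd to be up
  ansible.builtin.command: ceph osd stat -f json   # assumed invocation
  register: wait_for_all_osds_up
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true
  retries: 60   # matches the "(60 retries left)" countdown above
  delay: 10     # assumed polling interval
  until:
    - (wait_for_all_osds_up.stdout | from_json).num_osds | int > 0
    - (wait_for_all_osds_up.stdout | from_json).num_osds ==
      (wait_for_all_osds_up.stdout | from_json).num_up_osds
```

In the run above the first poll failed because the six freshly started OSDs had not yet registered as up; a later poll succeeded and the play moved on.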
2026-01-17 00:59:01.618801 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-01-17 00:59:01.618805 | orchestrator |
2026-01-17 00:59:01.618809 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-01-17 00:59:01.618812 | orchestrator | Saturday 17 January 2026 00:56:08 +0000 (0:00:12.401) 0:08:40.708 ******
2026-01-17 00:59:01.618816 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.618820 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.618823 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.618827 | orchestrator |
2026-01-17 00:59:01.618831 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-17 00:59:01.618835 | orchestrator | Saturday 17 January 2026 00:56:09 +0000 (0:00:01.090) 0:08:41.799 ******
2026-01-17 00:59:01.618838 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.618845 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.618849 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.618852 | orchestrator |
2026-01-17 00:59:01.618856 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-01-17 00:59:01.618860 | orchestrator | Saturday 17 January 2026 00:56:09 +0000 (0:00:00.386) 0:08:42.185 ******
2026-01-17 00:59:01.618864 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-17 00:59:01.618867 | orchestrator |
2026-01-17 00:59:01.618871 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-01-17 00:59:01.618875 | orchestrator | Saturday 17 January 2026 00:56:10 +0000 (0:00:00.816) 0:08:43.001 ******
2026-01-17 00:59:01.618878 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-17 00:59:01.618882 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-17 00:59:01.618886 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-17 00:59:01.618890 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.618893 | orchestrator |
2026-01-17 00:59:01.618897 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-01-17 00:59:01.618901 | orchestrator | Saturday 17 January 2026 00:56:10 +0000 (0:00:00.406) 0:08:43.408 ******
2026-01-17 00:59:01.618905 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.618923 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.618930 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.618936 | orchestrator |
2026-01-17 00:59:01.618946 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-01-17 00:59:01.618951 | orchestrator | Saturday 17 January 2026 00:56:11 +0000 (0:00:00.353) 0:08:43.762 ******
2026-01-17 00:59:01.618957 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.618964 | orchestrator |
2026-01-17 00:59:01.618970 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-01-17 00:59:01.618977 | orchestrator | Saturday 17 January 2026 00:56:11 +0000 (0:00:00.252) 0:08:44.014 ******
2026-01-17 00:59:01.618983 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.618990 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.618996 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.619003 | orchestrator |
2026-01-17 00:59:01.619009 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-01-17 00:59:01.619016 | orchestrator | Saturday 17 January 2026 00:56:11 +0000 (0:00:00.339) 0:08:44.354 ******
2026-01-17 00:59:01.619022 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.619029 | orchestrator |
2026-01-17 00:59:01.619035 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-01-17 00:59:01.619042 | orchestrator | Saturday 17 January 2026 00:56:12 +0000 (0:00:00.249) 0:08:44.603 ******
2026-01-17 00:59:01.619049 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.619055 | orchestrator |
2026-01-17 00:59:01.619061 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-01-17 00:59:01.619067 | orchestrator | Saturday 17 January 2026 00:56:12 +0000 (0:00:00.210) 0:08:44.814 ******
2026-01-17 00:59:01.619071 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.619074 | orchestrator |
2026-01-17 00:59:01.619078 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-01-17 00:59:01.619085 | orchestrator | Saturday 17 January 2026 00:56:12 +0000 (0:00:00.112) 0:08:44.927 ******
2026-01-17 00:59:01.619089 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.619093 | orchestrator |
2026-01-17 00:59:01.619096 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-01-17 00:59:01.619100 | orchestrator | Saturday 17 January 2026 00:56:13 +0000 (0:00:00.859) 0:08:45.786 ******
2026-01-17 00:59:01.619104 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.619108 | orchestrator |
2026-01-17 00:59:01.619111 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-01-17 00:59:01.619115 | orchestrator | Saturday 17 January 2026 00:56:13 +0000 (0:00:00.194) 0:08:45.981 ******
2026-01-17 00:59:01.619123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-17 00:59:01.619127 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-17 00:59:01.619130 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-17 00:59:01.619134 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.619138 | orchestrator |
2026-01-17 00:59:01.619142 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-01-17 00:59:01.619145 | orchestrator | Saturday 17 January 2026 00:56:13 +0000 (0:00:00.427) 0:08:46.408 ******
2026-01-17 00:59:01.619149 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.619153 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.619157 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.619160 | orchestrator |
2026-01-17 00:59:01.619164 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-01-17 00:59:01.619168 | orchestrator | Saturday 17 January 2026 00:56:14 +0000 (0:00:00.321) 0:08:46.729 ******
2026-01-17 00:59:01.619172 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.619175 | orchestrator |
2026-01-17 00:59:01.619179 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-01-17 00:59:01.619183 | orchestrator | Saturday 17 January 2026 00:56:14 +0000 (0:00:00.243) 0:08:46.973 ******
2026-01-17 00:59:01.619187 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.619190 | orchestrator |
2026-01-17 00:59:01.619194 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-01-17 00:59:01.619198 | orchestrator |
2026-01-17 00:59:01.619202 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-17 00:59:01.619205 | orchestrator | Saturday 17 January 2026 00:56:15 +0000 (0:00:00.946) 0:08:47.919 ******
2026-01-17 00:59:01.619209 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:59:01.619214 | orchestrator |
2026-01-17 00:59:01.619218 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-17 00:59:01.619221 | orchestrator | Saturday 17 January 2026 00:56:16 +0000 (0:00:01.208) 0:08:49.128 ******
2026-01-17 00:59:01.619225 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:59:01.619229 | orchestrator |
2026-01-17 00:59:01.619233 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-17 00:59:01.619237 | orchestrator | Saturday 17 January 2026 00:56:17 +0000 (0:00:01.198) 0:08:50.327 ******
2026-01-17 00:59:01.619240 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.619244 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.619248 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.619252 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:01.619255 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:01.619259 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:01.619263 | orchestrator |
2026-01-17 00:59:01.619267 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-17 00:59:01.619270 | orchestrator | Saturday 17 January 2026 00:56:18 +0000 (0:00:01.054) 0:08:51.381 ******
2026-01-17 00:59:01.619274 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.619278 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.619282 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.619285 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.619289 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.619293 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.619297 | orchestrator |
2026-01-17 00:59:01.619303 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-17 00:59:01.619307 | orchestrator | Saturday 17 January 2026 00:56:19 +0000 (0:00:00.743) 0:08:52.124 ******
2026-01-17 00:59:01.619313 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.619317 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.619321 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.619324 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.619328 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.619332 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.619336 | orchestrator |
2026-01-17 00:59:01.619339 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-17 00:59:01.619343 | orchestrator | Saturday 17 January 2026 00:56:20 +0000 (0:00:01.101) 0:08:53.226 ******
2026-01-17 00:59:01.619347 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.619351 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.619354 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.619358 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.619362 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.619367 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.619373 | orchestrator |
2026-01-17 00:59:01.619379 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-17 00:59:01.619385 | orchestrator | Saturday 17 January 2026 00:56:21 +0000 (0:00:00.740) 0:08:53.967 ******
2026-01-17 00:59:01.619392 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.619396 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.619400 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.619403 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:01.619407 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:01.619411 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:01.619414 | orchestrator |
2026-01-17 00:59:01.619418 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-17 00:59:01.619424 | orchestrator | Saturday 17 January 2026 00:56:22 +0000 (0:00:01.305) 0:08:55.273 ******
2026-01-17 00:59:01.619428 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.619432 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.619436 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.619439 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.619443 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.619447 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.619450 | orchestrator |
2026-01-17 00:59:01.619454 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-17 00:59:01.619458 | orchestrator | Saturday 17 January 2026 00:56:23 +0000 (0:00:00.649) 0:08:55.922 ******
2026-01-17 00:59:01.619462 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.619465 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.619469 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.619473 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.619476 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.619480 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.619484 | orchestrator |
2026-01-17 00:59:01.619487 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-17 00:59:01.619491 | orchestrator | Saturday 17 January 2026 00:56:24 +0000 (0:00:00.903) 0:08:56.826 ******
2026-01-17 00:59:01.619495 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.619498 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.619502 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.619506 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:01.619509 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:01.619513 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:01.619517 | orchestrator |
2026-01-17 00:59:01.619520 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-17 00:59:01.619524 | orchestrator | Saturday 17 January 2026 00:56:25 +0000 (0:00:01.126) 0:08:57.953 ******
2026-01-17 00:59:01.619528 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.619532 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.619535 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:01.619539 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.619543 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:01.619549 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:01.619553 | orchestrator |
2026-01-17 00:59:01.619556 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-17 00:59:01.619560 | orchestrator | Saturday 17 January 2026 00:56:26 +0000 (0:00:01.423) 0:08:59.376 ******
2026-01-17 00:59:01.619564 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.619568 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.619571 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.619575 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.619579 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.619582 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.619586 | orchestrator |
2026-01-17 00:59:01.619590 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-17 00:59:01.619593 | orchestrator | Saturday 17 January 2026 00:56:27 +0000 (0:00:00.605) 0:08:59.982 ******
2026-01-17 00:59:01.619597 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.619601 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.619604 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.619608 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:01.619612 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:01.619615 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:01.619619 | orchestrator |
2026-01-17 00:59:01.619623 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-17 00:59:01.619627 | orchestrator | Saturday 17 January 2026 00:56:28 +0000 (0:00:00.942) 0:09:00.925 ******
2026-01-17 00:59:01.619630 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.619634 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.619638 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.619641 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.619645 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.619649 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.619653 | orchestrator |
2026-01-17 00:59:01.619656 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-17 00:59:01.619660 | orchestrator | Saturday 17 January 2026 00:56:28 +0000 (0:00:00.593) 0:09:01.518 ******
2026-01-17 00:59:01.619664 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.619667 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.619671 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.619675 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.619678 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.619682 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.619686 | orchestrator |
2026-01-17 00:59:01.619692 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-17 00:59:01.619696 | orchestrator | Saturday 17 January 2026 00:56:29 +0000 (0:00:00.885) 0:09:02.404 ******
2026-01-17 00:59:01.619700 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.619704 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.619707 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.619711 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.619715 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.619718 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.619722 | orchestrator |
2026-01-17 00:59:01.619726 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-17 00:59:01.619730 | orchestrator | Saturday 17 January 2026 00:56:30 +0000 (0:00:00.605) 0:09:03.009 ******
2026-01-17 00:59:01.619733 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.619737 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.619741 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.619744 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.619748 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.619752 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.619755 | orchestrator |
2026-01-17 00:59:01.619759 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-17 00:59:01.619763 | orchestrator | Saturday 17 January 2026 00:56:31 +0000 (0:00:00.815) 0:09:03.825 ******
2026-01-17 00:59:01.619769 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.619773 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.619776 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.619780 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:01.619784 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:01.619787 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:01.619791 | orchestrator |
2026-01-17 00:59:01.619795 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-17 00:59:01.619802 | orchestrator | Saturday 17 January 2026 00:56:31 +0000 (0:00:00.611) 0:09:04.436 ******
2026-01-17 00:59:01.619806 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.619810 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.619814 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.619817 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:01.619821 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:01.619825 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:01.619829 | orchestrator |
2026-01-17 00:59:01.619832 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-17 00:59:01.619836 | orchestrator | Saturday 17 January 2026 00:56:32 +0000 (0:00:00.912) 0:09:05.348 ******
2026-01-17 00:59:01.619840 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.619844 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.619847 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.619851 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:01.619855 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:01.619858 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:01.619862 | orchestrator |
2026-01-17 00:59:01.619866 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-17 00:59:01.619870 | orchestrator | Saturday 17 January 2026 00:56:33 +0000 (0:00:00.671) 0:09:06.019 ******
2026-01-17 00:59:01.619873 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.619877 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.619881 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.619884 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:01.619888 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:01.619892 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:01.619897 | orchestrator |
2026-01-17 00:59:01.619903 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-01-17 00:59:01.619921 | orchestrator | Saturday 17 January 2026 00:56:34 +0000 (0:00:01.310) 0:09:07.330 ******
2026-01-17 00:59:01.619927 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-17 00:59:01.619934 | orchestrator |
2026-01-17 00:59:01.619940 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-01-17 00:59:01.619946 | orchestrator | Saturday 17 January 2026 00:56:38 +0000 (0:00:03.776) 0:09:11.107 ******
2026-01-17 00:59:01.619952 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-17 00:59:01.619957 | orchestrator |
2026-01-17 00:59:01.619963 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-01-17 00:59:01.619969 | orchestrator | Saturday 17 January 2026 00:56:40 +0000 (0:00:01.988) 0:09:13.096 ******
2026-01-17 00:59:01.619975 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:59:01.619982 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:59:01.619989 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:59:01.619995 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:01.620001 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:59:01.620008 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:59:01.620014 | orchestrator |
2026-01-17 00:59:01.620021 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-01-17 00:59:01.620027 | orchestrator | Saturday 17 January 2026 00:56:42 +0000 (0:00:01.921) 0:09:15.018 ******
2026-01-17 00:59:01.620034 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:59:01.620040 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:59:01.620046 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:59:01.620056 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:59:01.620062 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:59:01.620069 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:59:01.620075 | orchestrator |
2026-01-17 00:59:01.620082 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
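The systemd.yml tasks that follow generate a unit file that runs ceph-crash as a container on every node. A hedged sketch of what such a generated unit can look like (the unit name, image, mounts, and container engine are assumptions for illustration; the template shipped with ceph-ansible may differ):

```ini
# Hypothetical generated unit, e.g. /etc/systemd/system/ceph-crash@.service
[Unit]
Description=Ceph crash dump collector
After=network-online.target
Wants=network-online.target

[Service]
# Remove any stale container before starting a fresh one (assumed pattern).
ExecStartPre=-/usr/bin/docker rm -f ceph-crash-%i
ExecStart=/usr/bin/docker run --rm --name ceph-crash-%i \
    -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph \
    --entrypoint=/usr/bin/ceph-crash quay.io/ceph/daemon:latest
ExecStop=/usr/bin/docker stop ceph-crash-%i
Restart=always

[Install]
WantedBy=multi-user.target
```

The "Enable ... / Start the ceph-crash service" and handler "Restart the ceph-crash service" results in the log correspond to enabling and (re)starting a unit of roughly this shape on each node.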
2026-01-17 00:59:01.620088 | orchestrator | Saturday 17 January 2026 00:56:43 +0000 (0:00:01.067) 0:09:16.085 ******
2026-01-17 00:59:01.620095 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:59:01.620101 | orchestrator |
2026-01-17 00:59:01.620108 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-01-17 00:59:01.620115 | orchestrator | Saturday 17 January 2026 00:56:44 +0000 (0:00:01.265) 0:09:17.351 ******
2026-01-17 00:59:01.620121 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:59:01.620128 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:59:01.620134 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:59:01.620141 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:59:01.620150 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:59:01.620157 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:59:01.620163 | orchestrator |
2026-01-17 00:59:01.620170 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-01-17 00:59:01.620176 | orchestrator | Saturday 17 January 2026 00:56:46 +0000 (0:00:01.829) 0:09:19.181 ******
2026-01-17 00:59:01.620182 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:59:01.620187 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:59:01.620190 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:59:01.620194 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:59:01.620198 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:59:01.620202 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:59:01.620205 | orchestrator |
2026-01-17 00:59:01.620209 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-01-17 00:59:01.620213 | orchestrator | Saturday 17 January 2026 00:56:50 +0000 (0:00:03.557) 0:09:22.738 ******
2026-01-17 00:59:01.620217 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:59:01.620221 | orchestrator |
2026-01-17 00:59:01.620224 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-01-17 00:59:01.620228 | orchestrator | Saturday 17 January 2026 00:56:51 +0000 (0:00:01.405) 0:09:24.143 ******
2026-01-17 00:59:01.620232 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.620236 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.620239 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.620243 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:01.620249 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:01.620255 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:01.620262 | orchestrator |
2026-01-17 00:59:01.620268 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-01-17 00:59:01.620279 | orchestrator | Saturday 17 January 2026 00:56:52 +0000 (0:00:00.922) 0:09:25.065 ******
2026-01-17 00:59:01.620285 | orchestrator | changed: [testbed-node-3]
2026-01-17 00:59:01.620291 | orchestrator | changed: [testbed-node-4]
2026-01-17 00:59:01.620297 | orchestrator | changed: [testbed-node-5]
2026-01-17 00:59:01.620303 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:59:01.620309 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:59:01.620315 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:59:01.620320 | orchestrator |
2026-01-17 00:59:01.620327 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-01-17 00:59:01.620334 | orchestrator | Saturday 17 January 2026 00:56:55 +0000 (0:00:02.556) 0:09:27.621 ******
2026-01-17 00:59:01.620341 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.620347 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.620355 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.620363 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:01.620368 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:01.620374 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:01.620380 | orchestrator |
2026-01-17 00:59:01.620386 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-01-17 00:59:01.620393 | orchestrator |
2026-01-17 00:59:01.620399 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-17 00:59:01.620405 | orchestrator | Saturday 17 January 2026 00:56:56 +0000 (0:00:01.354) 0:09:28.976 ******
2026-01-17 00:59:01.620413 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-17 00:59:01.620420 | orchestrator |
2026-01-17 00:59:01.620427 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-17 00:59:01.620433 | orchestrator | Saturday 17 January 2026 00:56:57 +0000 (0:00:00.596) 0:09:29.573 ******
2026-01-17 00:59:01.620439 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-5, testbed-node-4
2026-01-17 00:59:01.620446 | orchestrator |
2026-01-17 00:59:01.620451 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-17 00:59:01.620455 | orchestrator | Saturday 17 January 2026 00:56:57 +0000 (0:00:00.789) 0:09:30.362 ******
2026-01-17 00:59:01.620459 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.620462 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.620466 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.620470 | orchestrator |
2026-01-17 00:59:01.620474 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-17 00:59:01.620477 | orchestrator | Saturday 17 January 2026 00:56:58 +0000 (0:00:00.325) 0:09:30.688 ******
2026-01-17 00:59:01.620481 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.620485 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.620488 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.620492 | orchestrator |
2026-01-17 00:59:01.620496 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-17 00:59:01.620499 | orchestrator | Saturday 17 January 2026 00:56:58 +0000 (0:00:00.774) 0:09:31.463 ******
2026-01-17 00:59:01.620503 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.620507 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.620511 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.620514 | orchestrator |
2026-01-17 00:59:01.620518 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-17 00:59:01.620522 | orchestrator | Saturday 17 January 2026 00:57:00 +0000 (0:00:01.150) 0:09:32.613 ******
2026-01-17 00:59:01.620525 | orchestrator | ok: [testbed-node-3]
2026-01-17 00:59:01.620529 | orchestrator | ok: [testbed-node-4]
2026-01-17 00:59:01.620533 | orchestrator | ok: [testbed-node-5]
2026-01-17 00:59:01.620536 | orchestrator |
2026-01-17 00:59:01.620540 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-17 00:59:01.620544 | orchestrator | Saturday 17 January 2026 00:57:00 +0000 (0:00:00.783) 0:09:33.397 ******
2026-01-17 00:59:01.620548 | orchestrator | skipping: [testbed-node-3]
2026-01-17 00:59:01.620551 | orchestrator | skipping: [testbed-node-4]
2026-01-17 00:59:01.620555 | orchestrator | skipping: [testbed-node-5]
2026-01-17 00:59:01.620559 | orchestrator |
2026-01-17 00:59:01.620562 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-17
00:59:01.620566 | orchestrator | Saturday 17 January 2026 00:57:01 +0000 (0:00:00.357) 0:09:33.755 ****** 2026-01-17 00:59:01.620570 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.620577 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.620581 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.620584 | orchestrator | 2026-01-17 00:59:01.620588 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-17 00:59:01.620592 | orchestrator | Saturday 17 January 2026 00:57:01 +0000 (0:00:00.369) 0:09:34.124 ****** 2026-01-17 00:59:01.620595 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.620602 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.620606 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.620610 | orchestrator | 2026-01-17 00:59:01.620613 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-17 00:59:01.620617 | orchestrator | Saturday 17 January 2026 00:57:02 +0000 (0:00:00.625) 0:09:34.750 ****** 2026-01-17 00:59:01.620621 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.620624 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.620628 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.620632 | orchestrator | 2026-01-17 00:59:01.620636 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-17 00:59:01.620639 | orchestrator | Saturday 17 January 2026 00:57:03 +0000 (0:00:00.861) 0:09:35.612 ****** 2026-01-17 00:59:01.620643 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.620647 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.620650 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.620654 | orchestrator | 2026-01-17 00:59:01.620658 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-17 00:59:01.620661 | orchestrator | 
Saturday 17 January 2026 00:57:03 +0000 (0:00:00.803) 0:09:36.415 ****** 2026-01-17 00:59:01.620665 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.620669 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.620672 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.620676 | orchestrator | 2026-01-17 00:59:01.620680 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-17 00:59:01.620688 | orchestrator | Saturday 17 January 2026 00:57:04 +0000 (0:00:00.332) 0:09:36.748 ****** 2026-01-17 00:59:01.620691 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.620695 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.620699 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.620702 | orchestrator | 2026-01-17 00:59:01.620706 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-17 00:59:01.620711 | orchestrator | Saturday 17 January 2026 00:57:04 +0000 (0:00:00.613) 0:09:37.362 ****** 2026-01-17 00:59:01.620717 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.620723 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.620728 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.620734 | orchestrator | 2026-01-17 00:59:01.620741 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-17 00:59:01.620747 | orchestrator | Saturday 17 January 2026 00:57:05 +0000 (0:00:00.362) 0:09:37.725 ****** 2026-01-17 00:59:01.620753 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.620759 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.620766 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.620772 | orchestrator | 2026-01-17 00:59:01.620779 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-17 00:59:01.620785 | orchestrator | Saturday 17 January 2026 00:57:05 +0000 
(0:00:00.364) 0:09:38.089 ****** 2026-01-17 00:59:01.620792 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.620796 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.620800 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.620804 | orchestrator | 2026-01-17 00:59:01.620808 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-17 00:59:01.620811 | orchestrator | Saturday 17 January 2026 00:57:05 +0000 (0:00:00.363) 0:09:38.452 ****** 2026-01-17 00:59:01.620815 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.620819 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.620823 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.620827 | orchestrator | 2026-01-17 00:59:01.620830 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-17 00:59:01.620834 | orchestrator | Saturday 17 January 2026 00:57:06 +0000 (0:00:00.651) 0:09:39.104 ****** 2026-01-17 00:59:01.620838 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.620842 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.620845 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.620855 | orchestrator | 2026-01-17 00:59:01.620859 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-17 00:59:01.620863 | orchestrator | Saturday 17 January 2026 00:57:06 +0000 (0:00:00.310) 0:09:39.414 ****** 2026-01-17 00:59:01.620866 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.620870 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.620874 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.620878 | orchestrator | 2026-01-17 00:59:01.620881 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-17 00:59:01.620885 | orchestrator | Saturday 17 January 2026 00:57:07 +0000 (0:00:00.335) 
0:09:39.750 ****** 2026-01-17 00:59:01.620889 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.620893 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.620897 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.620900 | orchestrator | 2026-01-17 00:59:01.620904 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-17 00:59:01.620934 | orchestrator | Saturday 17 January 2026 00:57:07 +0000 (0:00:00.337) 0:09:40.087 ****** 2026-01-17 00:59:01.620939 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.620943 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.620946 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.620950 | orchestrator | 2026-01-17 00:59:01.620954 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-01-17 00:59:01.620957 | orchestrator | Saturday 17 January 2026 00:57:08 +0000 (0:00:00.967) 0:09:41.054 ****** 2026-01-17 00:59:01.620961 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.620965 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.620969 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-01-17 00:59:01.620973 | orchestrator | 2026-01-17 00:59:01.620976 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-01-17 00:59:01.620980 | orchestrator | Saturday 17 January 2026 00:57:08 +0000 (0:00:00.437) 0:09:41.492 ****** 2026-01-17 00:59:01.620986 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-17 00:59:01.620990 | orchestrator | 2026-01-17 00:59:01.620994 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-01-17 00:59:01.620998 | orchestrator | Saturday 17 January 2026 00:57:11 +0000 (0:00:02.073) 0:09:43.566 ****** 2026-01-17 00:59:01.621002 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-01-17 00:59:01.621007 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.621011 | orchestrator | 2026-01-17 00:59:01.621015 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-01-17 00:59:01.621019 | orchestrator | Saturday 17 January 2026 00:57:11 +0000 (0:00:00.174) 0:09:43.740 ****** 2026-01-17 00:59:01.621024 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-17 00:59:01.621032 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-17 00:59:01.621036 | orchestrator | 2026-01-17 00:59:01.621043 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-01-17 00:59:01.621047 | orchestrator | Saturday 17 January 2026 00:57:20 +0000 (0:00:08.809) 0:09:52.550 ****** 2026-01-17 00:59:01.621051 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-17 00:59:01.621055 | orchestrator | 2026-01-17 00:59:01.621059 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-01-17 00:59:01.621065 | orchestrator | Saturday 17 January 2026 00:57:23 +0000 (0:00:03.379) 0:09:55.929 ****** 2026-01-17 00:59:01.621069 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-01-17 00:59:01.621073 | orchestrator | 2026-01-17 00:59:01.621077 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-01-17 00:59:01.621080 | orchestrator | Saturday 17 January 2026 00:57:23 +0000 (0:00:00.561) 0:09:56.490 ****** 2026-01-17 00:59:01.621084 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-17 00:59:01.621088 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-17 00:59:01.621091 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-17 00:59:01.621095 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-01-17 00:59:01.621099 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-01-17 00:59:01.621103 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-01-17 00:59:01.621106 | orchestrator | 2026-01-17 00:59:01.621110 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-01-17 00:59:01.621114 | orchestrator | Saturday 17 January 2026 00:57:25 +0000 (0:00:01.115) 0:09:57.606 ****** 2026-01-17 00:59:01.621117 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-17 00:59:01.621121 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-17 00:59:01.621125 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-17 00:59:01.621129 | orchestrator | 2026-01-17 00:59:01.621132 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-01-17 00:59:01.621136 | orchestrator | Saturday 17 January 2026 00:57:27 +0000 (0:00:02.276) 0:09:59.882 ****** 2026-01-17 00:59:01.621140 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-17 00:59:01.621144 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-01-17 00:59:01.621147 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:59:01.621151 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-17 00:59:01.621155 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-17 00:59:01.621158 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:59:01.621162 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-17 00:59:01.621166 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-17 00:59:01.621170 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:59:01.621173 | orchestrator | 2026-01-17 00:59:01.621177 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-01-17 00:59:01.621184 | orchestrator | Saturday 17 January 2026 00:57:29 +0000 (0:00:01.694) 0:10:01.576 ****** 2026-01-17 00:59:01.621190 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:59:01.621196 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:59:01.621202 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:59:01.621208 | orchestrator | 2026-01-17 00:59:01.621214 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-01-17 00:59:01.621220 | orchestrator | Saturday 17 January 2026 00:57:31 +0000 (0:00:02.724) 0:10:04.301 ****** 2026-01-17 00:59:01.621226 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.621232 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.621239 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.621246 | orchestrator | 2026-01-17 00:59:01.621252 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-01-17 00:59:01.621259 | orchestrator | Saturday 17 January 2026 00:57:32 +0000 (0:00:00.437) 0:10:04.738 ****** 2026-01-17 00:59:01.621266 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-01-17 00:59:01.621270 | orchestrator | 2026-01-17 00:59:01.621274 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-01-17 00:59:01.621281 | orchestrator | Saturday 17 January 2026 00:57:33 +0000 (0:00:01.330) 0:10:06.069 ****** 2026-01-17 00:59:01.621285 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 00:59:01.621288 | orchestrator | 2026-01-17 00:59:01.621292 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-01-17 00:59:01.621296 | orchestrator | Saturday 17 January 2026 00:57:34 +0000 (0:00:00.875) 0:10:06.945 ****** 2026-01-17 00:59:01.621299 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:59:01.621303 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:59:01.621307 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:59:01.621310 | orchestrator | 2026-01-17 00:59:01.621314 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-01-17 00:59:01.621318 | orchestrator | Saturday 17 January 2026 00:57:35 +0000 (0:00:01.388) 0:10:08.333 ****** 2026-01-17 00:59:01.621322 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:59:01.621325 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:59:01.621329 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:59:01.621333 | orchestrator | 2026-01-17 00:59:01.621336 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-01-17 00:59:01.621340 | orchestrator | Saturday 17 January 2026 00:57:37 +0000 (0:00:01.466) 0:10:09.800 ****** 2026-01-17 00:59:01.621344 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:59:01.621347 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:59:01.621351 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:59:01.621355 | orchestrator | 2026-01-17 
00:59:01.621359 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-01-17 00:59:01.621366 | orchestrator | Saturday 17 January 2026 00:57:39 +0000 (0:00:01.864) 0:10:11.664 ****** 2026-01-17 00:59:01.621370 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:59:01.621373 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:59:01.621377 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:59:01.621381 | orchestrator | 2026-01-17 00:59:01.621385 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-01-17 00:59:01.621388 | orchestrator | Saturday 17 January 2026 00:57:41 +0000 (0:00:02.009) 0:10:13.673 ****** 2026-01-17 00:59:01.621392 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.621396 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.621399 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.621403 | orchestrator | 2026-01-17 00:59:01.621407 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-17 00:59:01.621411 | orchestrator | Saturday 17 January 2026 00:57:42 +0000 (0:00:01.442) 0:10:15.116 ****** 2026-01-17 00:59:01.621414 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:59:01.621418 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:59:01.621422 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:59:01.621425 | orchestrator | 2026-01-17 00:59:01.621429 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-17 00:59:01.621433 | orchestrator | Saturday 17 January 2026 00:57:43 +0000 (0:00:00.652) 0:10:15.768 ****** 2026-01-17 00:59:01.621436 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 00:59:01.621440 | orchestrator | 2026-01-17 00:59:01.621444 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-01-17 00:59:01.621448 | orchestrator | Saturday 17 January 2026 00:57:44 +0000 (0:00:00.786) 0:10:16.555 ****** 2026-01-17 00:59:01.621451 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.621455 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.621459 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.621462 | orchestrator | 2026-01-17 00:59:01.621466 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-17 00:59:01.621470 | orchestrator | Saturday 17 January 2026 00:57:44 +0000 (0:00:00.347) 0:10:16.902 ****** 2026-01-17 00:59:01.621474 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:59:01.621480 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:59:01.621484 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:59:01.621488 | orchestrator | 2026-01-17 00:59:01.621491 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-17 00:59:01.621495 | orchestrator | Saturday 17 January 2026 00:57:45 +0000 (0:00:01.305) 0:10:18.208 ****** 2026-01-17 00:59:01.621499 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-17 00:59:01.621502 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-17 00:59:01.621506 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-17 00:59:01.621510 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.621513 | orchestrator | 2026-01-17 00:59:01.621517 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-17 00:59:01.621521 | orchestrator | Saturday 17 January 2026 00:57:46 +0000 (0:00:00.975) 0:10:19.184 ****** 2026-01-17 00:59:01.621527 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.621533 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.621539 | orchestrator | ok: [testbed-node-5] 2026-01-17 
00:59:01.621546 | orchestrator | 2026-01-17 00:59:01.621552 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-01-17 00:59:01.621558 | orchestrator | 2026-01-17 00:59:01.621564 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-17 00:59:01.621570 | orchestrator | Saturday 17 January 2026 00:57:47 +0000 (0:00:00.864) 0:10:20.048 ****** 2026-01-17 00:59:01.621576 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 00:59:01.621583 | orchestrator | 2026-01-17 00:59:01.621589 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-17 00:59:01.621596 | orchestrator | Saturday 17 January 2026 00:57:48 +0000 (0:00:00.526) 0:10:20.575 ****** 2026-01-17 00:59:01.621603 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 00:59:01.621609 | orchestrator | 2026-01-17 00:59:01.621619 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-17 00:59:01.621623 | orchestrator | Saturday 17 January 2026 00:57:48 +0000 (0:00:00.743) 0:10:21.319 ****** 2026-01-17 00:59:01.621627 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.621631 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.621634 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.621638 | orchestrator | 2026-01-17 00:59:01.621642 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-17 00:59:01.621645 | orchestrator | Saturday 17 January 2026 00:57:49 +0000 (0:00:00.312) 0:10:21.632 ****** 2026-01-17 00:59:01.621649 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.621653 | orchestrator | ok: [testbed-node-4] 2026-01-17 
00:59:01.621657 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.621660 | orchestrator | 2026-01-17 00:59:01.621664 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-17 00:59:01.621668 | orchestrator | Saturday 17 January 2026 00:57:49 +0000 (0:00:00.746) 0:10:22.378 ****** 2026-01-17 00:59:01.621672 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.621675 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.621679 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.621683 | orchestrator | 2026-01-17 00:59:01.621686 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-17 00:59:01.621690 | orchestrator | Saturday 17 January 2026 00:57:50 +0000 (0:00:01.089) 0:10:23.467 ****** 2026-01-17 00:59:01.621694 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.621698 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.621701 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.621705 | orchestrator | 2026-01-17 00:59:01.621709 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-17 00:59:01.621713 | orchestrator | Saturday 17 January 2026 00:57:51 +0000 (0:00:00.755) 0:10:24.223 ****** 2026-01-17 00:59:01.621720 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.621727 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.621731 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.621735 | orchestrator | 2026-01-17 00:59:01.621739 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-17 00:59:01.621743 | orchestrator | Saturday 17 January 2026 00:57:52 +0000 (0:00:00.349) 0:10:24.572 ****** 2026-01-17 00:59:01.621746 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.621750 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.621754 | orchestrator | skipping: 
[testbed-node-5] 2026-01-17 00:59:01.621757 | orchestrator | 2026-01-17 00:59:01.621761 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-17 00:59:01.621765 | orchestrator | Saturday 17 January 2026 00:57:52 +0000 (0:00:00.341) 0:10:24.913 ****** 2026-01-17 00:59:01.621768 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.621772 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.621776 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.621780 | orchestrator | 2026-01-17 00:59:01.621783 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-17 00:59:01.621787 | orchestrator | Saturday 17 January 2026 00:57:52 +0000 (0:00:00.577) 0:10:25.491 ****** 2026-01-17 00:59:01.621791 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.621794 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.621798 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.621802 | orchestrator | 2026-01-17 00:59:01.621805 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-17 00:59:01.621809 | orchestrator | Saturday 17 January 2026 00:57:53 +0000 (0:00:00.743) 0:10:26.234 ****** 2026-01-17 00:59:01.621813 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.621817 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.621820 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.621824 | orchestrator | 2026-01-17 00:59:01.621828 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-17 00:59:01.621832 | orchestrator | Saturday 17 January 2026 00:57:54 +0000 (0:00:00.802) 0:10:27.036 ****** 2026-01-17 00:59:01.621835 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.621839 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.621843 | orchestrator | skipping: [testbed-node-5] 2026-01-17 
00:59:01.621846 | orchestrator | 2026-01-17 00:59:01.621850 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-17 00:59:01.621854 | orchestrator | Saturday 17 January 2026 00:57:54 +0000 (0:00:00.325) 0:10:27.362 ****** 2026-01-17 00:59:01.621857 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.621861 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.621865 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.621868 | orchestrator | 2026-01-17 00:59:01.621872 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-17 00:59:01.621876 | orchestrator | Saturday 17 January 2026 00:57:55 +0000 (0:00:00.602) 0:10:27.965 ****** 2026-01-17 00:59:01.621880 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.621883 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.621887 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.621891 | orchestrator | 2026-01-17 00:59:01.621894 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-17 00:59:01.621898 | orchestrator | Saturday 17 January 2026 00:57:55 +0000 (0:00:00.344) 0:10:28.310 ****** 2026-01-17 00:59:01.621902 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.621905 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.621922 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.621926 | orchestrator | 2026-01-17 00:59:01.621930 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-17 00:59:01.621934 | orchestrator | Saturday 17 January 2026 00:57:56 +0000 (0:00:00.392) 0:10:28.702 ****** 2026-01-17 00:59:01.621937 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.621944 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.621948 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.621951 | orchestrator | 2026-01-17 
00:59:01.621955 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-17 00:59:01.621959 | orchestrator | Saturday 17 January 2026 00:57:56 +0000 (0:00:00.372) 0:10:29.075 ****** 2026-01-17 00:59:01.621963 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.621966 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.621970 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.621974 | orchestrator | 2026-01-17 00:59:01.621977 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-17 00:59:01.621983 | orchestrator | Saturday 17 January 2026 00:57:56 +0000 (0:00:00.329) 0:10:29.405 ****** 2026-01-17 00:59:01.621987 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.621991 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.621995 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.621998 | orchestrator | 2026-01-17 00:59:01.622002 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-17 00:59:01.622006 | orchestrator | Saturday 17 January 2026 00:57:57 +0000 (0:00:00.665) 0:10:30.070 ****** 2026-01-17 00:59:01.622009 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.622039 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.622043 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.622046 | orchestrator | 2026-01-17 00:59:01.622050 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-17 00:59:01.622054 | orchestrator | Saturday 17 January 2026 00:57:57 +0000 (0:00:00.330) 0:10:30.401 ****** 2026-01-17 00:59:01.622058 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.622061 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.622065 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.622069 | orchestrator | 2026-01-17 00:59:01.622073 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-17 00:59:01.622076 | orchestrator | Saturday 17 January 2026 00:57:58 +0000 (0:00:00.391) 0:10:30.792 ****** 2026-01-17 00:59:01.622080 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.622084 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.622088 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.622091 | orchestrator | 2026-01-17 00:59:01.622095 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-01-17 00:59:01.622099 | orchestrator | Saturday 17 January 2026 00:57:59 +0000 (0:00:00.847) 0:10:31.640 ****** 2026-01-17 00:59:01.622106 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 00:59:01.622110 | orchestrator | 2026-01-17 00:59:01.622113 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-17 00:59:01.622117 | orchestrator | Saturday 17 January 2026 00:57:59 +0000 (0:00:00.550) 0:10:32.191 ****** 2026-01-17 00:59:01.622121 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-17 00:59:01.622125 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-17 00:59:01.622128 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-17 00:59:01.622132 | orchestrator | 2026-01-17 00:59:01.622136 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-17 00:59:01.622140 | orchestrator | Saturday 17 January 2026 00:58:01 +0000 (0:00:02.192) 0:10:34.383 ****** 2026-01-17 00:59:01.622143 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-17 00:59:01.622147 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-17 00:59:01.622151 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:59:01.622155 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-01-17 00:59:01.622159 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-17 00:59:01.622162 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:59:01.622166 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-17 00:59:01.622170 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-17 00:59:01.622177 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:59:01.622180 | orchestrator | 2026-01-17 00:59:01.622184 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-01-17 00:59:01.622188 | orchestrator | Saturday 17 January 2026 00:58:03 +0000 (0:00:01.585) 0:10:35.969 ****** 2026-01-17 00:59:01.622192 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.622195 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.622199 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.622203 | orchestrator | 2026-01-17 00:59:01.622207 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-01-17 00:59:01.622210 | orchestrator | Saturday 17 January 2026 00:58:03 +0000 (0:00:00.346) 0:10:36.315 ****** 2026-01-17 00:59:01.622214 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 00:59:01.622218 | orchestrator | 2026-01-17 00:59:01.622222 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-01-17 00:59:01.622225 | orchestrator | Saturday 17 January 2026 00:58:04 +0000 (0:00:00.533) 0:10:36.849 ****** 2026-01-17 00:59:01.622229 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-17 00:59:01.622234 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-17 00:59:01.622237 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-17 00:59:01.622241 | orchestrator | 2026-01-17 00:59:01.622245 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-01-17 00:59:01.622249 | orchestrator | Saturday 17 January 2026 00:58:05 +0000 (0:00:01.481) 0:10:38.330 ****** 2026-01-17 00:59:01.622252 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-17 00:59:01.622256 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-17 00:59:01.622260 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-17 00:59:01.622264 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-17 00:59:01.622269 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-17 00:59:01.622276 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-17 00:59:01.622283 | orchestrator | 2026-01-17 00:59:01.622289 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-17 00:59:01.622295 | orchestrator | Saturday 17 January 2026 00:58:10 +0000 (0:00:05.077) 0:10:43.408 ****** 2026-01-17 00:59:01.622301 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-17 00:59:01.622308 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-17 00:59:01.622314 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-17 00:59:01.622320 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-17 00:59:01.622326 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-17 00:59:01.622332 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-17 00:59:01.622339 | orchestrator | 2026-01-17 00:59:01.622345 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-17 00:59:01.622351 | orchestrator | Saturday 17 January 2026 00:58:12 +0000 (0:00:02.099) 0:10:45.507 ****** 2026-01-17 00:59:01.622357 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-17 00:59:01.622368 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:59:01.622374 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-17 00:59:01.622380 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:59:01.622386 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-17 00:59:01.622392 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:59:01.622399 | orchestrator | 2026-01-17 00:59:01.622410 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-01-17 00:59:01.622417 | orchestrator | Saturday 17 January 2026 00:58:14 +0000 (0:00:01.160) 0:10:46.667 ****** 2026-01-17 00:59:01.622425 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-01-17 00:59:01.622432 | orchestrator | 2026-01-17 00:59:01.622439 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-01-17 00:59:01.622445 | orchestrator | Saturday 17 January 2026 00:58:14 +0000 (0:00:00.217) 0:10:46.885 ****** 2026-01-17 00:59:01.622452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-01-17 00:59:01.622459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-17 00:59:01.622466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-17 00:59:01.622472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-17 00:59:01.622478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-17 00:59:01.622482 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.622486 | orchestrator | 2026-01-17 00:59:01.622489 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-01-17 00:59:01.622493 | orchestrator | Saturday 17 January 2026 00:58:15 +0000 (0:00:01.175) 0:10:48.061 ****** 2026-01-17 00:59:01.622497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-17 00:59:01.622501 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-17 00:59:01.622505 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-17 00:59:01.622509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-17 00:59:01.622513 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-17 00:59:01.622516 | orchestrator | skipping: [testbed-node-3] 2026-01-17 
00:59:01.622520 | orchestrator | 2026-01-17 00:59:01.622524 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-01-17 00:59:01.622528 | orchestrator | Saturday 17 January 2026 00:58:16 +0000 (0:00:00.665) 0:10:48.726 ****** 2026-01-17 00:59:01.622532 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-17 00:59:01.622536 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-17 00:59:01.622540 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-17 00:59:01.622544 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-17 00:59:01.622560 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-17 00:59:01.622565 | orchestrator | 2026-01-17 00:59:01.622568 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-01-17 00:59:01.622572 | orchestrator | Saturday 17 January 2026 00:58:46 +0000 (0:00:30.487) 0:11:19.214 ****** 2026-01-17 00:59:01.622576 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.622580 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.622584 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.622587 | orchestrator | 2026-01-17 00:59:01.622591 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-01-17 00:59:01.622595 | orchestrator | 
Saturday 17 January 2026 00:58:47 +0000 (0:00:00.311) 0:11:19.526 ****** 2026-01-17 00:59:01.622599 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.622603 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.622606 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.622610 | orchestrator | 2026-01-17 00:59:01.622614 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-01-17 00:59:01.622618 | orchestrator | Saturday 17 January 2026 00:58:47 +0000 (0:00:00.320) 0:11:19.847 ****** 2026-01-17 00:59:01.622622 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 00:59:01.622625 | orchestrator | 2026-01-17 00:59:01.622629 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-01-17 00:59:01.622633 | orchestrator | Saturday 17 January 2026 00:58:48 +0000 (0:00:00.803) 0:11:20.650 ****** 2026-01-17 00:59:01.622640 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 00:59:01.622644 | orchestrator | 2026-01-17 00:59:01.622648 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-01-17 00:59:01.622651 | orchestrator | Saturday 17 January 2026 00:58:48 +0000 (0:00:00.516) 0:11:21.167 ****** 2026-01-17 00:59:01.622655 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:59:01.622659 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:59:01.622663 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:59:01.622666 | orchestrator | 2026-01-17 00:59:01.622670 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-01-17 00:59:01.622674 | orchestrator | Saturday 17 January 2026 00:58:50 +0000 (0:00:01.383) 0:11:22.551 ****** 2026-01-17 00:59:01.622677 | orchestrator | changed: 
[testbed-node-3] 2026-01-17 00:59:01.622681 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:59:01.622685 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:59:01.622689 | orchestrator | 2026-01-17 00:59:01.622692 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-01-17 00:59:01.622696 | orchestrator | Saturday 17 January 2026 00:58:51 +0000 (0:00:01.576) 0:11:24.127 ****** 2026-01-17 00:59:01.622700 | orchestrator | changed: [testbed-node-3] 2026-01-17 00:59:01.622704 | orchestrator | changed: [testbed-node-4] 2026-01-17 00:59:01.622707 | orchestrator | changed: [testbed-node-5] 2026-01-17 00:59:01.622711 | orchestrator | 2026-01-17 00:59:01.622715 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-01-17 00:59:01.622719 | orchestrator | Saturday 17 January 2026 00:58:53 +0000 (0:00:01.871) 0:11:25.999 ****** 2026-01-17 00:59:01.622722 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-17 00:59:01.622726 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-17 00:59:01.622730 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-17 00:59:01.622734 | orchestrator | 2026-01-17 00:59:01.622737 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-17 00:59:01.622746 | orchestrator | Saturday 17 January 2026 00:58:56 +0000 (0:00:03.156) 0:11:29.156 ****** 2026-01-17 00:59:01.622750 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.622753 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.622757 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.622761 | orchestrator 
| 2026-01-17 00:59:01.622765 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-17 00:59:01.622768 | orchestrator | Saturday 17 January 2026 00:58:56 +0000 (0:00:00.333) 0:11:29.489 ****** 2026-01-17 00:59:01.622772 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 00:59:01.622776 | orchestrator | 2026-01-17 00:59:01.622780 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-17 00:59:01.622783 | orchestrator | Saturday 17 January 2026 00:58:57 +0000 (0:00:00.538) 0:11:30.028 ****** 2026-01-17 00:59:01.622787 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.622791 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.622795 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.622798 | orchestrator | 2026-01-17 00:59:01.622802 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-17 00:59:01.622806 | orchestrator | Saturday 17 January 2026 00:58:58 +0000 (0:00:00.610) 0:11:30.639 ****** 2026-01-17 00:59:01.622810 | orchestrator | skipping: [testbed-node-3] 2026-01-17 00:59:01.622813 | orchestrator | skipping: [testbed-node-4] 2026-01-17 00:59:01.622817 | orchestrator | skipping: [testbed-node-5] 2026-01-17 00:59:01.622821 | orchestrator | 2026-01-17 00:59:01.622824 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-17 00:59:01.622828 | orchestrator | Saturday 17 January 2026 00:58:58 +0000 (0:00:00.348) 0:11:30.987 ****** 2026-01-17 00:59:01.622832 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-17 00:59:01.622836 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-17 00:59:01.622842 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-17 00:59:01.622845 | orchestrator 
| skipping: [testbed-node-3] 2026-01-17 00:59:01.622849 | orchestrator | 2026-01-17 00:59:01.622853 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-17 00:59:01.622857 | orchestrator | Saturday 17 January 2026 00:58:59 +0000 (0:00:00.603) 0:11:31.591 ****** 2026-01-17 00:59:01.622860 | orchestrator | ok: [testbed-node-3] 2026-01-17 00:59:01.622864 | orchestrator | ok: [testbed-node-4] 2026-01-17 00:59:01.622868 | orchestrator | ok: [testbed-node-5] 2026-01-17 00:59:01.622872 | orchestrator | 2026-01-17 00:59:01.622875 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:59:01.622879 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-01-17 00:59:01.622884 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-01-17 00:59:01.622888 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-01-17 00:59:01.622891 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-01-17 00:59:01.622895 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-01-17 00:59:01.622902 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-01-17 00:59:01.622906 | orchestrator | 2026-01-17 00:59:01.622923 | orchestrator | 2026-01-17 00:59:01.622930 | orchestrator | 2026-01-17 00:59:01.622937 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 00:59:01.622950 | orchestrator | Saturday 17 January 2026 00:58:59 +0000 (0:00:00.255) 0:11:31.846 ****** 2026-01-17 00:59:01.622956 | orchestrator | =============================================================================== 
2026-01-17 00:59:01.622963 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 49.67s 2026-01-17 00:59:01.622967 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 43.98s 2026-01-17 00:59:01.622971 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.42s 2026-01-17 00:59:01.622975 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.49s 2026-01-17 00:59:01.622979 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.81s 2026-01-17 00:59:01.622982 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.70s 2026-01-17 00:59:01.622986 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.40s 2026-01-17 00:59:01.622990 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.21s 2026-01-17 00:59:01.622994 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.13s 2026-01-17 00:59:01.622997 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.81s 2026-01-17 00:59:01.623001 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.75s 2026-01-17 00:59:01.623005 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.74s 2026-01-17 00:59:01.623009 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.35s 2026-01-17 00:59:01.623013 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.08s 2026-01-17 00:59:01.623016 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.48s 2026-01-17 00:59:01.623020 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.42s 2026-01-17 
00:59:01.623024 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.21s 2026-01-17 00:59:01.623028 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.78s 2026-01-17 00:59:01.623031 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.59s 2026-01-17 00:59:01.623035 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.56s 2026-01-17 00:59:01.623039 | orchestrator | 2026-01-17 00:59:01 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:59:04.653152 | orchestrator | 2026-01-17 00:59:04 | INFO  | Task bb757c37-4951-45ee-97d6-7030351d7249 is in state STARTED 2026-01-17 00:59:04.655076 | orchestrator | 2026-01-17 00:59:04 | INFO  | Task a2bd0e71-76fc-40aa-b003-09f7646ecc3c is in state STARTED 2026-01-17 00:59:04.657233 | orchestrator | 2026-01-17 00:59:04 | INFO  | Task 6adcda3a-ea21-49a7-8993-d3a3658b387d is in state STARTED 2026-01-17 00:59:04.657290 | orchestrator | 2026-01-17 00:59:04 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:59:07.701389 | orchestrator | 2026-01-17 00:59:07 | INFO  | Task bb757c37-4951-45ee-97d6-7030351d7249 is in state STARTED 2026-01-17 00:59:07.702840 | orchestrator | 2026-01-17 00:59:07 | INFO  | Task a2bd0e71-76fc-40aa-b003-09f7646ecc3c is in state STARTED 2026-01-17 00:59:07.705591 | orchestrator | 2026-01-17 00:59:07 | INFO  | Task 6adcda3a-ea21-49a7-8993-d3a3658b387d is in state STARTED 2026-01-17 00:59:07.705949 | orchestrator | 2026-01-17 00:59:07 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:59:10.752726 | orchestrator | 2026-01-17 00:59:10 | INFO  | Task bb757c37-4951-45ee-97d6-7030351d7249 is in state STARTED 2026-01-17 00:59:10.753956 | orchestrator | 2026-01-17 00:59:10 | INFO  | Task a2bd0e71-76fc-40aa-b003-09f7646ecc3c is in state STARTED 2026-01-17 00:59:10.756091 | orchestrator | 2026-01-17 00:59:10 | INFO  | Task 
6adcda3a-ea21-49a7-8993-d3a3658b387d is in state STARTED 2026-01-17 00:59:10.756158 | orchestrator | 2026-01-17 00:59:10 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:59:35.141812 | orchestrator | 2026-01-17 00:59:35 | INFO  | Task bb757c37-4951-45ee-97d6-7030351d7249 is in state STARTED 2026-01-17 00:59:35.143688 | orchestrator | 
2026-01-17 00:59:35 | INFO  | Task a2bd0e71-76fc-40aa-b003-09f7646ecc3c is in state STARTED 2026-01-17 00:59:35.145131 | orchestrator | 2026-01-17 00:59:35 | INFO  | Task 6adcda3a-ea21-49a7-8993-d3a3658b387d is in state STARTED 2026-01-17 00:59:35.145803 | orchestrator | 2026-01-17 00:59:35 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:59:38.194011 | orchestrator | 2026-01-17 00:59:38 | INFO  | Task bb757c37-4951-45ee-97d6-7030351d7249 is in state STARTED 2026-01-17 00:59:38.195812 | orchestrator | 2026-01-17 00:59:38 | INFO  | Task a2bd0e71-76fc-40aa-b003-09f7646ecc3c is in state STARTED 2026-01-17 00:59:38.197685 | orchestrator | 2026-01-17 00:59:38 | INFO  | Task 6adcda3a-ea21-49a7-8993-d3a3658b387d is in state STARTED 2026-01-17 00:59:38.197728 | orchestrator | 2026-01-17 00:59:38 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:59:41.244724 | orchestrator | 2026-01-17 00:59:41 | INFO  | Task bb757c37-4951-45ee-97d6-7030351d7249 is in state STARTED 2026-01-17 00:59:41.245014 | orchestrator | 2026-01-17 00:59:41 | INFO  | Task a2bd0e71-76fc-40aa-b003-09f7646ecc3c is in state STARTED 2026-01-17 00:59:41.246008 | orchestrator | 2026-01-17 00:59:41 | INFO  | Task 6adcda3a-ea21-49a7-8993-d3a3658b387d is in state STARTED 2026-01-17 00:59:41.246101 | orchestrator | 2026-01-17 00:59:41 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:59:44.290964 | orchestrator | 2026-01-17 00:59:44 | INFO  | Task bb757c37-4951-45ee-97d6-7030351d7249 is in state STARTED 2026-01-17 00:59:44.291441 | orchestrator | 2026-01-17 00:59:44 | INFO  | Task a2bd0e71-76fc-40aa-b003-09f7646ecc3c is in state STARTED 2026-01-17 00:59:44.293186 | orchestrator | 2026-01-17 00:59:44.293263 | orchestrator | 2026-01-17 00:59:44 | INFO  | Task 6adcda3a-ea21-49a7-8993-d3a3658b387d is in state SUCCESS 2026-01-17 00:59:44.295241 | orchestrator | 2026-01-17 00:59:44.295286 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2026-01-17 00:59:44.295299 | orchestrator | 2026-01-17 00:59:44.295310 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-17 00:59:44.295321 | orchestrator | Saturday 17 January 2026 00:56:44 +0000 (0:00:00.266) 0:00:00.266 ****** 2026-01-17 00:59:44.295331 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:44.295343 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:44.295353 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:44.295364 | orchestrator | 2026-01-17 00:59:44.295371 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-17 00:59:44.295378 | orchestrator | Saturday 17 January 2026 00:56:44 +0000 (0:00:00.291) 0:00:00.558 ****** 2026-01-17 00:59:44.295385 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-01-17 00:59:44.295392 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-01-17 00:59:44.295398 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-01-17 00:59:44.295405 | orchestrator | 2026-01-17 00:59:44.295411 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-01-17 00:59:44.295417 | orchestrator | 2026-01-17 00:59:44.295423 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-17 00:59:44.295430 | orchestrator | Saturday 17 January 2026 00:56:44 +0000 (0:00:00.363) 0:00:00.921 ****** 2026-01-17 00:59:44.295437 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:59:44.295443 | orchestrator | 2026-01-17 00:59:44.295449 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-01-17 00:59:44.295456 | orchestrator | Saturday 17 January 2026 00:56:45 +0000 (0:00:00.418) 0:00:01.340 ****** 2026-01-17 
00:59:44.295482 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-17 00:59:44.295489 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-17 00:59:44.295495 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-17 00:59:44.295501 | orchestrator | 2026-01-17 00:59:44.295507 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-01-17 00:59:44.295513 | orchestrator | Saturday 17 January 2026 00:56:47 +0000 (0:00:01.673) 0:00:03.014 ****** 2026-01-17 00:59:44.295522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-17 00:59:44.295539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-17 00:59:44.295557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-17 00:59:44.295567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-17 00:59:44.295580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-17 00:59:44.295592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-17 00:59:44.295600 | orchestrator | 2026-01-17 00:59:44.295607 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-17 00:59:44.295613 | orchestrator | Saturday 17 January 2026 00:56:48 +0000 (0:00:01.601) 0:00:04.615 ****** 2026-01-17 00:59:44.295619 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:59:44.295626 | orchestrator | 2026-01-17 00:59:44.295632 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-01-17 00:59:44.295638 | orchestrator | Saturday 17 January 2026 00:56:49 +0000 (0:00:00.558) 0:00:05.174 ****** 2026-01-17 00:59:44.295651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 
'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-17 00:59:44.295658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-17 00:59:44.295670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-17 00:59:44.295683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-17 00:59:44.295695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-17 00:59:44.295702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-17 00:59:44.295713 | orchestrator | 2026-01-17 00:59:44.295720 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-01-17 00:59:44.295726 | orchestrator | Saturday 17 January 2026 00:56:52 +0000 (0:00:03.695) 0:00:08.869 ****** 2026-01-17 00:59:44.295733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-17 00:59:44.295743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-17 00:59:44.295750 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:44.295757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-17 00:59:44.295768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-17 00:59:44.295780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-17 00:59:44.295787 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:44.295797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-17 00:59:44.295804 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:44.295810 | orchestrator | 2026-01-17 00:59:44.295817 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-01-17 00:59:44.295823 | orchestrator | Saturday 17 January 2026 00:56:54 +0000 (0:00:01.199) 0:00:10.068 ****** 2026-01-17 00:59:44.295830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-17 00:59:44.295843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-17 00:59:44.295855 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:44.295951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-17 00:59:44.295963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-17 00:59:44.295971 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:44.295979 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-17 00:59:44.295993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-17 00:59:44.296007 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:44.296014 | orchestrator | 
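The task output above iterates a Kolla-style services dict (the repeated `(item={'key': ..., 'value': ...})` entries) and stamps a per-node `healthcheck_curl` command into each service's healthcheck. A minimal sketch of that pattern, assuming a trimmed-down services dict and a helper name (`healthcheck_tests`) invented here for illustration — this is not the actual kolla-ansible code:

```python
# Hypothetical sketch: derive the per-node CMD-SHELL healthcheck from a
# Kolla-style services dict, mirroring the 'healthcheck_curl http://<ip>:<port>'
# lines in the log. Field set trimmed to what this sketch uses.
opensearch_services = {
    "opensearch": {"enabled": True, "port": 9200},
    "opensearch-dashboards": {"enabled": True, "port": 5601},
}

def healthcheck_tests(api_interface_address: str) -> dict:
    """Build the healthcheck test list for each enabled service on one node."""
    return {
        name: ["CMD-SHELL",
               f"healthcheck_curl http://{api_interface_address}:{svc['port']}"]
        for name, svc in opensearch_services.items()
        if svc["enabled"]
    }

# One call per node address, as on the three-node testbed above.
for ip in ("192.168.16.10", "192.168.16.11", "192.168.16.12"):
    print(ip, healthcheck_tests(ip)["opensearch"][1])
```

This matches why the same item dict appears three times per task in the log with only the IP differing: each host renders the dict with its own `api_interface` address.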
2026-01-17 00:59:44.296020 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-01-17 00:59:44.296027 | orchestrator | Saturday 17 January 2026 00:56:54 +0000 (0:00:00.828) 0:00:10.897 ****** 2026-01-17 00:59:44.296035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-17 00:59:44.296043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-17 00:59:44.296053 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-17 00:59:44.296066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-17 00:59:44.296080 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-17 00:59:44.296088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-17 00:59:44.296096 | orchestrator | 2026-01-17 00:59:44.296103 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-01-17 00:59:44.296110 | orchestrator | Saturday 17 January 2026 00:56:57 +0000 (0:00:02.927) 0:00:13.824 ****** 2026-01-17 00:59:44.296121 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:44.296128 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:59:44.296135 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:59:44.296142 | orchestrator | 2026-01-17 00:59:44.296149 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-01-17 00:59:44.296156 | orchestrator | Saturday 17 January 2026 00:57:00 +0000 (0:00:02.879) 0:00:16.704 ****** 2026-01-17 00:59:44.296163 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:44.296170 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:59:44.296177 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:59:44.296184 | orchestrator | 2026-01-17 00:59:44.296194 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-01-17 00:59:44.296205 | orchestrator | Saturday 17 January 2026 00:57:03 +0000 (0:00:02.834) 0:00:19.539 ****** 2026-01-17 00:59:44.296227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-17 00:59:44.296249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-17 00:59:44.296259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-17 
00:59:44.296269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-17 00:59:44.296287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-17 00:59:44.296312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-17 00:59:44.296323 | orchestrator | 2026-01-17 00:59:44.296333 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-17 00:59:44.296343 | orchestrator | Saturday 17 January 2026 00:57:05 +0000 (0:00:02.444) 0:00:21.983 ****** 2026-01-17 00:59:44.296352 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:44.296363 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:44.296372 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:44.296381 | orchestrator | 2026-01-17 00:59:44.296391 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-17 00:59:44.296402 | orchestrator | Saturday 17 January 2026 
00:57:06 +0000 (0:00:00.310) 0:00:22.294 ****** 2026-01-17 00:59:44.296411 | orchestrator | 2026-01-17 00:59:44.296420 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-17 00:59:44.296430 | orchestrator | Saturday 17 January 2026 00:57:06 +0000 (0:00:00.068) 0:00:22.362 ****** 2026-01-17 00:59:44.296440 | orchestrator | 2026-01-17 00:59:44.296451 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-17 00:59:44.296461 | orchestrator | Saturday 17 January 2026 00:57:06 +0000 (0:00:00.067) 0:00:22.430 ****** 2026-01-17 00:59:44.296471 | orchestrator | 2026-01-17 00:59:44.296481 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-01-17 00:59:44.296492 | orchestrator | Saturday 17 January 2026 00:57:06 +0000 (0:00:00.091) 0:00:22.521 ****** 2026-01-17 00:59:44.296501 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:44.296513 | orchestrator | 2026-01-17 00:59:44.296519 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-01-17 00:59:44.296525 | orchestrator | Saturday 17 January 2026 00:57:06 +0000 (0:00:00.202) 0:00:22.724 ****** 2026-01-17 00:59:44.296532 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:44.296538 | orchestrator | 2026-01-17 00:59:44.296544 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-01-17 00:59:44.296550 | orchestrator | Saturday 17 January 2026 00:57:07 +0000 (0:00:00.908) 0:00:23.633 ****** 2026-01-17 00:59:44.296556 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:44.296563 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:59:44.296569 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:59:44.296581 | orchestrator | 2026-01-17 00:59:44.296587 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 
2026-01-17 00:59:44.296594 | orchestrator | Saturday 17 January 2026 00:58:16 +0000 (0:01:08.511) 0:01:32.145 ****** 2026-01-17 00:59:44.296600 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:44.296606 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:59:44.296612 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:59:44.296618 | orchestrator | 2026-01-17 00:59:44.296624 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-17 00:59:44.296631 | orchestrator | Saturday 17 January 2026 00:59:26 +0000 (0:01:10.759) 0:02:42.904 ****** 2026-01-17 00:59:44.296637 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:59:44.296643 | orchestrator | 2026-01-17 00:59:44.296657 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-01-17 00:59:44.296663 | orchestrator | Saturday 17 January 2026 00:59:27 +0000 (0:00:00.810) 0:02:43.715 ****** 2026-01-17 00:59:44.296669 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:44.296676 | orchestrator | 2026-01-17 00:59:44.296682 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-01-17 00:59:44.296688 | orchestrator | Saturday 17 January 2026 00:59:30 +0000 (0:00:02.699) 0:02:46.415 ****** 2026-01-17 00:59:44.296694 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:44.296700 | orchestrator | 2026-01-17 00:59:44.296706 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-01-17 00:59:44.296712 | orchestrator | Saturday 17 January 2026 00:59:32 +0000 (0:00:02.583) 0:02:48.998 ****** 2026-01-17 00:59:44.296719 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:44.296725 | orchestrator | 2026-01-17 00:59:44.296731 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-01-17 
00:59:44.296737 | orchestrator | Saturday 17 January 2026 00:59:35 +0000 (0:00:02.568) 0:02:51.566 ****** 2026-01-17 00:59:44.296743 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:44.296749 | orchestrator | 2026-01-17 00:59:44.296755 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-01-17 00:59:44.296762 | orchestrator | Saturday 17 January 2026 00:59:38 +0000 (0:00:03.043) 0:02:54.610 ****** 2026-01-17 00:59:44.296768 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:44.296774 | orchestrator | 2026-01-17 00:59:44.296780 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 00:59:44.296788 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-17 00:59:44.296796 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-17 00:59:44.296808 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-17 00:59:44.296814 | orchestrator | 2026-01-17 00:59:44.296821 | orchestrator | 2026-01-17 00:59:44.296827 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 00:59:44.296833 | orchestrator | Saturday 17 January 2026 00:59:41 +0000 (0:00:02.564) 0:02:57.175 ****** 2026-01-17 00:59:44.296839 | orchestrator | =============================================================================== 2026-01-17 00:59:44.296845 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 70.76s 2026-01-17 00:59:44.296851 | orchestrator | opensearch : Restart opensearch container ------------------------------ 68.51s 2026-01-17 00:59:44.296878 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.70s 2026-01-17 00:59:44.296885 | orchestrator | opensearch : Create new log 
retention policy ---------------------------- 3.04s 2026-01-17 00:59:44.296891 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.93s 2026-01-17 00:59:44.296898 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.88s 2026-01-17 00:59:44.296912 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.83s 2026-01-17 00:59:44.296918 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.70s 2026-01-17 00:59:44.296924 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.58s 2026-01-17 00:59:44.296930 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.57s 2026-01-17 00:59:44.296936 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.56s 2026-01-17 00:59:44.296942 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.44s 2026-01-17 00:59:44.296948 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.67s 2026-01-17 00:59:44.296955 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.60s 2026-01-17 00:59:44.296961 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.20s 2026-01-17 00:59:44.296967 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.91s 2026-01-17 00:59:44.296973 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.83s 2026-01-17 00:59:44.296979 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.81s 2026-01-17 00:59:44.296985 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.56s 2026-01-17 00:59:44.296991 | orchestrator | opensearch : include_tasks 
---------------------------------------------- 0.42s 2026-01-17 00:59:44.296998 | orchestrator | 2026-01-17 00:59:44 | INFO  | Wait 1 second(s) until the next check 2026-01-17 00:59:47.341754 | orchestrator | 2026-01-17 00:59:47 | INFO  | Task fecfcce5-dc9e-433c-b21d-55e70e0848b6 is in state STARTED 2026-01-17 00:59:47.342998 | orchestrator | 2026-01-17 00:59:47 | INFO  | Task bb757c37-4951-45ee-97d6-7030351d7249 is in state STARTED 2026-01-17 00:59:47.346649 | orchestrator | 2026-01-17 00:59:47 | INFO  | Task a2bd0e71-76fc-40aa-b003-09f7646ecc3c is in state SUCCESS 2026-01-17 00:59:47.348893 | orchestrator | 2026-01-17 00:59:47.348937 | orchestrator | 2026-01-17 00:59:47.348944 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-01-17 00:59:47.348949 | orchestrator | 2026-01-17 00:59:47.348953 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-17 00:59:47.348969 | orchestrator | Saturday 17 January 2026 00:56:44 +0000 (0:00:00.097) 0:00:00.097 ****** 2026-01-17 00:59:47.348973 | orchestrator | ok: [localhost] => { 2026-01-17 00:59:47.348978 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-01-17 00:59:47.348983 | orchestrator | } 2026-01-17 00:59:47.348988 | orchestrator | 2026-01-17 00:59:47.348991 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-01-17 00:59:47.348995 | orchestrator | Saturday 17 January 2026 00:56:44 +0000 (0:00:00.047) 0:00:00.145 ****** 2026-01-17 00:59:47.349039 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-01-17 00:59:47.349046 | orchestrator | ...ignoring 2026-01-17 00:59:47.349051 | orchestrator | 2026-01-17 00:59:47.349055 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-01-17 00:59:47.349146 | orchestrator | Saturday 17 January 2026 00:56:47 +0000 (0:00:02.857) 0:00:03.003 ****** 2026-01-17 00:59:47.349150 | orchestrator | skipping: [localhost] 2026-01-17 00:59:47.349154 | orchestrator | 2026-01-17 00:59:47.349158 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-01-17 00:59:47.349162 | orchestrator | Saturday 17 January 2026 00:56:47 +0000 (0:00:00.044) 0:00:03.047 ****** 2026-01-17 00:59:47.349166 | orchestrator | ok: [localhost] 2026-01-17 00:59:47.349170 | orchestrator | 2026-01-17 00:59:47.349173 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-17 00:59:47.349191 | orchestrator | 2026-01-17 00:59:47.349196 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-17 00:59:47.349199 | orchestrator | Saturday 17 January 2026 00:56:47 +0000 (0:00:00.113) 0:00:03.161 ****** 2026-01-17 00:59:47.349203 | orchestrator | ok: [testbed-node-0] 2026-01-17 00:59:47.349207 | orchestrator | ok: [testbed-node-1] 2026-01-17 00:59:47.349211 | orchestrator | ok: [testbed-node-2] 2026-01-17 00:59:47.349214 | orchestrator | 2026-01-17 00:59:47.349218 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-17 00:59:47.349222 | orchestrator | Saturday 17 January 2026 00:56:47 +0000 (0:00:00.283) 0:00:03.444 ****** 2026-01-17 00:59:47.349226 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-17 00:59:47.349230 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-01-17 00:59:47.349234 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-17 00:59:47.349238 | orchestrator | 2026-01-17 00:59:47.349242 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-17 00:59:47.349247 | orchestrator | 2026-01-17 00:59:47.349291 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-17 00:59:47.349298 | orchestrator | Saturday 17 January 2026 00:56:48 +0000 (0:00:00.493) 0:00:03.938 ****** 2026-01-17 00:59:47.349305 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-17 00:59:47.349311 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-17 00:59:47.349317 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-17 00:59:47.349323 | orchestrator | 2026-01-17 00:59:47.349329 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-17 00:59:47.349335 | orchestrator | Saturday 17 January 2026 00:56:48 +0000 (0:00:00.347) 0:00:04.285 ****** 2026-01-17 00:59:47.349341 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:59:47.349347 | orchestrator | 2026-01-17 00:59:47.349353 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-01-17 00:59:47.349359 | orchestrator | Saturday 17 January 2026 00:56:48 +0000 (0:00:00.526) 0:00:04.812 ****** 2026-01-17 00:59:47.349389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-17 00:59:47.349403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-17 00:59:47.349408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-17 00:59:47.349412 | orchestrator | 2026-01-17 00:59:47.349421 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-01-17 00:59:47.349425 | orchestrator | Saturday 17 January 2026 00:56:52 +0000 (0:00:03.613) 0:00:08.426 ****** 2026-01-17 00:59:47.349429 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:47.349433 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:47.349437 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:47.349444 | orchestrator | 2026-01-17 00:59:47.349451 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-01-17 00:59:47.349455 | orchestrator | Saturday 17 January 2026 00:56:53 +0000 (0:00:01.026) 0:00:09.453 ****** 2026-01-17 00:59:47.349458 | orchestrator | skipping: [testbed-node-1] 2026-01-17 
00:59:47.349462 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:47.349466 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:47.349470 | orchestrator | 2026-01-17 00:59:47.349474 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-01-17 00:59:47.349477 | orchestrator | Saturday 17 January 2026 00:56:55 +0000 (0:00:01.651) 0:00:11.104 ****** 2026-01-17 00:59:47.349482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-17 00:59:47.349489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-17 00:59:47.349499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-17 
00:59:47.349504 | orchestrator | 2026-01-17 00:59:47.349507 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-01-17 00:59:47.349511 | orchestrator | Saturday 17 January 2026 00:56:59 +0000 (0:00:04.263) 0:00:15.368 ****** 2026-01-17 00:59:47.349515 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:47.349519 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:47.349523 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:47.349526 | orchestrator | 2026-01-17 00:59:47.349530 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-01-17 00:59:47.349534 | orchestrator | Saturday 17 January 2026 00:57:00 +0000 (0:00:01.258) 0:00:16.626 ****** 2026-01-17 00:59:47.349538 | orchestrator | changed: [testbed-node-1] 2026-01-17 00:59:47.349541 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:47.349545 | orchestrator | changed: [testbed-node-2] 2026-01-17 00:59:47.349549 | orchestrator | 2026-01-17 00:59:47.349553 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-17 00:59:47.349556 | orchestrator | Saturday 17 January 2026 00:57:06 +0000 (0:00:05.510) 0:00:22.136 ****** 2026-01-17 00:59:47.349560 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 00:59:47.349564 | orchestrator | 2026-01-17 00:59:47.349568 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-17 00:59:47.349572 | orchestrator | Saturday 17 January 2026 00:57:06 +0000 (0:00:00.549) 0:00:22.686 ****** 2026-01-17 00:59:47.349587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-17 00:59:47.349595 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:47.349599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-17 00:59:47.349603 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:47.349613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-17 00:59:47.349621 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:47.349625 | orchestrator | 2026-01-17 00:59:47.349628 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-17 00:59:47.349632 | orchestrator | Saturday 17 January 2026 00:57:10 +0000 (0:00:03.993) 0:00:26.679 ****** 2026-01-17 00:59:47.349636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-17 00:59:47.349640 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:47.349647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-17 00:59:47.349654 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:47.349660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-17 00:59:47.349665 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:47.349668 | orchestrator | 2026-01-17 00:59:47.349672 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-17 00:59:47.349676 | orchestrator | Saturday 17 January 2026 00:57:12 +0000 (0:00:02.127) 0:00:28.807 ****** 2026-01-17 00:59:47.349680 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-17 00:59:47.349687 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:47.349696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-17 00:59:47.349701 | orchestrator | skipping: [testbed-node-0] 2026-01-17 00:59:47.349705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-17 00:59:47.349712 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:47.349716 | orchestrator | 2026-01-17 00:59:47.349720 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-01-17 00:59:47.349723 | orchestrator | Saturday 17 January 2026 00:57:15 +0000 
(0:00:02.127) 0:00:30.934 ****** 2026-01-17 00:59:47.349733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-17 00:59:47.349738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-17 00:59:47.349751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-17 00:59:47.349756 | orchestrator | 2026-01-17 00:59:47.349760 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-01-17 00:59:47.349763 | orchestrator | Saturday 17 January 2026 00:57:17 +0000 (0:00:02.500) 0:00:33.434 ****** 2026-01-17 00:59:47.349767 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:47.349771 | orchestrator | 
changed: [testbed-node-1]
2026-01-17 00:59:47.349774 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:59:47.349778 | orchestrator |
2026-01-17 00:59:47.349782 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-01-17 00:59:47.349786 | orchestrator | Saturday 17 January 2026 00:57:18 +0000 (0:00:00.865) 0:00:34.300 ******
2026-01-17 00:59:47.349790 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:47.349793 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:47.349797 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:47.349801 | orchestrator |
2026-01-17 00:59:47.349804 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-01-17 00:59:47.349808 | orchestrator | Saturday 17 January 2026 00:57:18 +0000 (0:00:00.555) 0:00:34.855 ******
2026-01-17 00:59:47.349812 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:47.349815 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:47.349819 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:47.349823 | orchestrator |
2026-01-17 00:59:47.349827 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-01-17 00:59:47.349843 | orchestrator | Saturday 17 January 2026 00:57:19 +0000 (0:00:00.320) 0:00:35.176 ******
2026-01-17 00:59:47.349848 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-01-17 00:59:47.349897 | orchestrator | ...ignoring
2026-01-17 00:59:47.349902 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-01-17 00:59:47.349906 | orchestrator | ...ignoring
2026-01-17 00:59:47.349910 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-01-17 00:59:47.349913 | orchestrator | ...ignoring
2026-01-17 00:59:47.349918 | orchestrator |
2026-01-17 00:59:47.349922 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-01-17 00:59:47.349926 | orchestrator | Saturday 17 January 2026 00:57:30 +0000 (0:00:10.967) 0:00:46.144 ******
2026-01-17 00:59:47.349929 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:47.349933 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:47.349938 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:47.349942 | orchestrator |
2026-01-17 00:59:47.349946 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-01-17 00:59:47.349951 | orchestrator | Saturday 17 January 2026 00:57:30 +0000 (0:00:00.562) 0:00:46.706 ******
2026-01-17 00:59:47.349955 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:47.349959 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:47.349963 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:47.349968 | orchestrator |
2026-01-17 00:59:47.349972 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-01-17 00:59:47.349976 | orchestrator | Saturday 17 January 2026 00:57:31 +0000 (0:00:00.692) 0:00:47.399 ******
2026-01-17 00:59:47.349980 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:47.349984 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:47.349989 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:47.349993 | orchestrator |
2026-01-17 00:59:47.349998 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-01-17 00:59:47.350002 | orchestrator | Saturday 17 January 2026 00:57:32 +0000 (0:00:00.672) 0:00:48.072 ******
2026-01-17 00:59:47.350006 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:47.350011 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:47.350049 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:47.350054 | orchestrator |
2026-01-17 00:59:47.350058 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-01-17 00:59:47.350063 | orchestrator | Saturday 17 January 2026 00:57:32 +0000 (0:00:00.434) 0:00:48.506 ******
2026-01-17 00:59:47.350069 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:47.350075 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:47.350081 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:47.350088 | orchestrator |
2026-01-17 00:59:47.350095 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-01-17 00:59:47.350101 | orchestrator | Saturday 17 January 2026 00:57:33 +0000 (0:00:00.436) 0:00:48.942 ******
2026-01-17 00:59:47.350112 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:47.350119 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:47.350125 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:47.350131 | orchestrator |
2026-01-17 00:59:47.350138 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-01-17 00:59:47.350144 | orchestrator | Saturday 17 January 2026 00:57:33 +0000 (0:00:00.718) 0:00:49.661 ******
2026-01-17 00:59:47.350150 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:47.350154 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:47.350159 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-01-17 00:59:47.350163 | orchestrator |
2026-01-17 00:59:47.350168 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-01-17 00:59:47.350177 | orchestrator | Saturday 17 January 2026 00:57:34 +0000 (0:00:00.423) 0:00:50.084 ******
2026-01-17 00:59:47.350181 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:59:47.350186 | orchestrator |
2026-01-17 00:59:47.350191 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-01-17 00:59:47.350195 | orchestrator | Saturday 17 January 2026 00:57:44 +0000 (0:00:10.371) 0:01:00.456 ******
2026-01-17 00:59:47.350199 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:47.350204 | orchestrator |
2026-01-17 00:59:47.350208 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-01-17 00:59:47.350213 | orchestrator | Saturday 17 January 2026 00:57:44 +0000 (0:00:00.121) 0:01:00.578 ******
2026-01-17 00:59:47.350217 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:47.350221 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:47.350225 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:47.350230 | orchestrator |
2026-01-17 00:59:47.350234 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-01-17 00:59:47.350238 | orchestrator | Saturday 17 January 2026 00:57:45 +0000 (0:00:01.081) 0:01:01.660 ******
2026-01-17 00:59:47.350243 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:59:47.350247 | orchestrator |
2026-01-17 00:59:47.350252 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-01-17 00:59:47.350256 | orchestrator | Saturday 17 January 2026 00:57:54 +0000 (0:00:08.523) 0:01:10.183 ******
2026-01-17 00:59:47.350261 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:47.350265 | orchestrator |
2026-01-17 00:59:47.350270 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-01-17 00:59:47.350274 | orchestrator | Saturday 17 January 2026 00:57:55 +0000 (0:00:01.614) 0:01:11.798 ******
2026-01-17 00:59:47.350279 | orchestrator | ok: [testbed-node-0]
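The sequence above shows the role selecting exactly one bootstrap host: only testbed-node-0 runs the bootstrap container and the "Starting first MariaDB container" handler, while the other two nodes wait to join the now-formed cluster. The ordering decision can be sketched as below; `plan_galera_start` is a hypothetical helper for illustration, not kolla-ansible's actual implementation:

```python
def plan_galera_start(hosts, existing_cluster=False):
    """Order Galera startup actions for a list of inventory hosts.

    When no cluster exists yet, exactly one node must bootstrap
    (started with --wsrep-new-cluster); every other node then joins
    the running cluster. If a cluster already exists, no bootstrap
    is needed and all nodes simply (re)join it.
    """
    if not hosts:
        return []
    if existing_cluster:
        return [("join", host) for host in hosts]
    first, rest = hosts[0], hosts[1:]
    return [("bootstrap", first)] + [("join", host) for host in rest]
```

In the log this matches the run: a "bootstrap" action for testbed-node-0, followed by "join" starts for testbed-node-1 and testbed-node-2.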
2026-01-17 00:59:47.350286 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-01-17 00:59:47.350290 | orchestrator | Saturday 17 January 2026 00:57:58 +0000 (0:00:02.630) 0:01:14.429 ******
2026-01-17 00:59:47.350294 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:59:47.350298 | orchestrator |
2026-01-17 00:59:47.350302 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-01-17 00:59:47.350306 | orchestrator | Saturday 17 January 2026 00:57:58 +0000 (0:00:00.131) 0:01:14.560 ******
2026-01-17 00:59:47.350309 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:47.350313 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:47.350317 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:47.350320 | orchestrator |
2026-01-17 00:59:47.350324 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-01-17 00:59:47.350328 | orchestrator | Saturday 17 January 2026 00:57:59 +0000 (0:00:00.343) 0:01:14.904 ******
2026-01-17 00:59:47.350332 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:47.350336 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-01-17 00:59:47.350339 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:59:47.350343 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:59:47.350347 | orchestrator |
2026-01-17 00:59:47.350350 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-01-17 00:59:47.350354 | orchestrator | skipping: no hosts matched
2026-01-17 00:59:47.350358 | orchestrator |
2026-01-17 00:59:47.350362 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-01-17 00:59:47.350365 | orchestrator |
2026-01-17 00:59:47.350369 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-01-17 00:59:47.350373 | orchestrator | Saturday 17 January 2026 00:57:59 +0000 (0:00:00.599) 0:01:15.504 ******
2026-01-17 00:59:47.350377 | orchestrator | changed: [testbed-node-1]
2026-01-17 00:59:47.350381 | orchestrator |
2026-01-17 00:59:47.350384 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-01-17 00:59:47.350388 | orchestrator | Saturday 17 January 2026 00:58:15 +0000 (0:00:15.543) 0:01:31.047 ******
2026-01-17 00:59:47.350398 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:47.350402 | orchestrator |
2026-01-17 00:59:47.350406 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-01-17 00:59:47.350410 | orchestrator | Saturday 17 January 2026 00:58:30 +0000 (0:00:15.611) 0:01:46.658 ******
2026-01-17 00:59:47.350413 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:47.350417 | orchestrator |
2026-01-17 00:59:47.350421 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-01-17 00:59:47.350425 | orchestrator |
2026-01-17 00:59:47.350428 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-01-17 00:59:47.350432 | orchestrator | Saturday 17 January 2026 00:58:33 +0000 (0:00:02.440) 0:01:49.098 ******
2026-01-17 00:59:47.350436 | orchestrator | changed: [testbed-node-2]
2026-01-17 00:59:47.350440 | orchestrator |
2026-01-17 00:59:47.350443 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-01-17 00:59:47.350447 | orchestrator | Saturday 17 January 2026 00:58:51 +0000 (0:00:18.164) 0:02:07.263 ******
2026-01-17 00:59:47.350451 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:47.350455 | orchestrator |
2026-01-17 00:59:47.350461 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-01-17 00:59:47.350467 | orchestrator | Saturday 17 January 2026 00:59:06 +0000 (0:00:15.588) 0:02:22.852 ******
2026-01-17 00:59:47.350472 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:47.350477 | orchestrator |
2026-01-17 00:59:47.350483 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-01-17 00:59:47.350488 | orchestrator |
2026-01-17 00:59:47.350529 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-01-17 00:59:47.350539 | orchestrator | Saturday 17 January 2026 00:59:09 +0000 (0:00:02.540) 0:02:25.392 ******
2026-01-17 00:59:47.350545 | orchestrator | changed: [testbed-node-0]
2026-01-17 00:59:47.350551 | orchestrator |
2026-01-17 00:59:47.350561 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-01-17 00:59:47.350566 | orchestrator | Saturday 17 January 2026 00:59:22 +0000 (0:00:13.280) 0:02:38.673 ******
2026-01-17 00:59:47.350570 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:47.350575 | orchestrator |
2026-01-17 00:59:47.350580 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-01-17 00:59:47.350586 | orchestrator | Saturday 17 January 2026 00:59:27 +0000 (0:00:04.637) 0:02:43.310 ******
2026-01-17 00:59:47.350593 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:47.350599 | orchestrator |
2026-01-17 00:59:47.350605 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-01-17 00:59:47.350611 | orchestrator |
2026-01-17 00:59:47.350617 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-01-17 00:59:47.350621 | orchestrator | Saturday 17 January 2026 00:59:30 +0000 (0:00:02.800) 0:02:46.110 ******
2026-01-17 00:59:47.350625 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 00:59:47.350628 | orchestrator |
2026-01-17 00:59:47.350632 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-01-17 00:59:47.350636 | orchestrator | Saturday 17 January 2026 00:59:30 +0000 (0:00:00.566) 0:02:46.677 ****** 2026-01-17 00:59:47.350640 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:47.350644 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:47.350648 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:47.350651 | orchestrator | 2026-01-17 00:59:47.350655 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-01-17 00:59:47.350659 | orchestrator | Saturday 17 January 2026 00:59:33 +0000 (0:00:02.711) 0:02:49.389 ****** 2026-01-17 00:59:47.350663 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:47.350667 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:47.350670 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:47.350674 | orchestrator | 2026-01-17 00:59:47.350678 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-01-17 00:59:47.350687 | orchestrator | Saturday 17 January 2026 00:59:36 +0000 (0:00:02.482) 0:02:51.871 ****** 2026-01-17 00:59:47.350691 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:47.350695 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:47.350699 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:47.350702 | orchestrator | 2026-01-17 00:59:47.350706 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-01-17 00:59:47.350710 | orchestrator | Saturday 17 January 2026 00:59:38 +0000 (0:00:02.506) 0:02:54.378 ****** 2026-01-17 00:59:47.350714 | orchestrator | skipping: [testbed-node-1] 2026-01-17 00:59:47.350717 | orchestrator | skipping: [testbed-node-2] 2026-01-17 00:59:47.350721 | orchestrator | changed: [testbed-node-0] 2026-01-17 00:59:47.350725 | orchestrator | 
2026-01-17 00:59:47.350729 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-01-17 00:59:47.350733 | orchestrator | Saturday 17 January 2026 00:59:40 +0000 (0:00:02.404) 0:02:56.783 ******
2026-01-17 00:59:47.350736 | orchestrator | ok: [testbed-node-0]
2026-01-17 00:59:47.350740 | orchestrator | ok: [testbed-node-1]
2026-01-17 00:59:47.350744 | orchestrator | ok: [testbed-node-2]
2026-01-17 00:59:47.350748 | orchestrator |
2026-01-17 00:59:47.350752 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-01-17 00:59:47.350755 | orchestrator | Saturday 17 January 2026 00:59:44 +0000 (0:00:03.140) 0:02:59.923 ******
2026-01-17 00:59:47.350759 | orchestrator | skipping: [testbed-node-0]
2026-01-17 00:59:47.350763 | orchestrator | skipping: [testbed-node-1]
2026-01-17 00:59:47.350767 | orchestrator | skipping: [testbed-node-2]
2026-01-17 00:59:47.350770 | orchestrator |
2026-01-17 00:59:47.350774 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 00:59:47.350778 | orchestrator | localhost      : ok=3   changed=0  unreachable=0 failed=0 skipped=1   rescued=0 ignored=1
2026-01-17 00:59:47.350782 | orchestrator | testbed-node-0 : ok=34  changed=16 unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-01-17 00:59:47.350788 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-01-17 00:59:47.350791 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-01-17 00:59:47.350795 | orchestrator |
2026-01-17 00:59:47.350799 | orchestrator |
2026-01-17 00:59:47.350803 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 00:59:47.350807 | orchestrator | Saturday 17 January 2026 00:59:44 +0000 (0:00:00.264) 0:03:00.187 ******
2026-01-17 00:59:47.350810 | orchestrator | ===============================================================================
2026-01-17 00:59:47.350814 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 33.71s
2026-01-17 00:59:47.350818 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.20s
2026-01-17 00:59:47.350822 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 13.28s
2026-01-17 00:59:47.350826 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.97s
2026-01-17 00:59:47.350829 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.37s
2026-01-17 00:59:47.350833 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.52s
2026-01-17 00:59:47.350840 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 5.51s
2026-01-17 00:59:47.350844 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.98s
2026-01-17 00:59:47.350848 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.64s
2026-01-17 00:59:47.350894 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.26s
2026-01-17 00:59:47.350903 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.99s
2026-01-17 00:59:47.350907 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.61s
2026-01-17 00:59:47.350910 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.14s
2026-01-17 00:59:47.350914 | orchestrator | Check MariaDB service --------------------------------------------------- 2.86s
2026-01-17 00:59:47.350918 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.80s
2026-01-17 00:59:47.350922 | orchestrator |
mariadb : Creating shard root mysql user -------------------------------- 2.71s
2026-01-17 00:59:47.350926 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.63s
2026-01-17 00:59:47.350930 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.51s
2026-01-17 00:59:47.350933 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.50s
2026-01-17 00:59:47.350937 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.48s
2026-01-17 00:59:47.350941 | orchestrator | 2026-01-17 00:59:47 | INFO  | Task 72c21961-be6e-47db-9722-e81aadb3b3af is in state STARTED
2026-01-17 00:59:47.350945 | orchestrator | 2026-01-17 00:59:47 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:59:50.391358 | orchestrator | 2026-01-17 00:59:50 | INFO  | Task fecfcce5-dc9e-433c-b21d-55e70e0848b6 is in state STARTED
2026-01-17 00:59:50.392302 | orchestrator | 2026-01-17 00:59:50 | INFO  | Task bb757c37-4951-45ee-97d6-7030351d7249 is in state STARTED
2026-01-17 00:59:50.393618 | orchestrator | 2026-01-17 00:59:50 | INFO  | Task 72c21961-be6e-47db-9722-e81aadb3b3af is in state STARTED
2026-01-17 00:59:50.393674 | orchestrator | 2026-01-17 00:59:50 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:59:53.426399 | orchestrator | 2026-01-17 00:59:53 | INFO  | Task fecfcce5-dc9e-433c-b21d-55e70e0848b6 is in state STARTED
2026-01-17 00:59:53.426779 | orchestrator | 2026-01-17 00:59:53 | INFO  | Task bb757c37-4951-45ee-97d6-7030351d7249 is in state STARTED
2026-01-17 00:59:53.427935 | orchestrator | 2026-01-17 00:59:53 | INFO  | Task 72c21961-be6e-47db-9722-e81aadb3b3af is in state STARTED
2026-01-17 00:59:53.427986 | orchestrator | 2026-01-17 00:59:53 | INFO  | Wait 1 second(s) until the next check
2026-01-17 00:59:56.466090 | orchestrator | 2026-01-17 00:59:56 | INFO  | Task fecfcce5-dc9e-433c-b21d-55e70e0848b6 is in state STARTED
2026-01-17 01:01:18 | INFO  | Task fecfcce5-dc9e-433c-b21d-55e70e0848b6 is in state STARTED
2026-01-17 01:01:18.855423 | orchestrator | 2026-01-17 01:01:18 | INFO  | Task bb757c37-4951-45ee-97d6-7030351d7249 is in state SUCCESS
2026-01-17 01:01:18.856119 | orchestrator |
2026-01-17 01:01:18.856164 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-17 01:01:18.856172 | orchestrator | 2.16.14
2026-01-17 01:01:18.856181 | orchestrator |
2026-01-17 01:01:18.856236 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-01-17 01:01:18.856246 | orchestrator |
2026-01-17 01:01:18.856253 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-01-17 01:01:18.856259 | orchestrator | Saturday 17 January 2026 00:59:04 +0000 (0:00:00.600) 0:00:00.600 ******
2026-01-17 01:01:18.856266 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-17 01:01:18.856273 | orchestrator |
2026-01-17 01:01:18.856290 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-01-17 01:01:18.856297 | orchestrator | Saturday 17 January 2026 00:59:05 +0000 (0:00:00.648) 0:00:01.249 ******
2026-01-17 01:01:18.856304 | orchestrator | ok: [testbed-node-3]
2026-01-17 01:01:18.856312 | orchestrator | ok: [testbed-node-5]
2026-01-17 01:01:18.856319 | orchestrator | ok: [testbed-node-4]
2026-01-17 01:01:18.856325 | orchestrator |
2026-01-17 01:01:18.856332 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-01-17 01:01:18.856337 | orchestrator | Saturday 17 January 2026 00:59:05 +0000 (0:00:00.649) 0:00:01.898 ******
2026-01-17 01:01:18.856343 | orchestrator | ok: [testbed-node-3]
2026-01-17 01:01:18.856350 | orchestrator | ok: [testbed-node-4]
2026-01-17 01:01:18.856356 | orchestrator | ok: [testbed-node-5]
2026-01-17 01:01:18.856362 | orchestrator |
2026-01-17 01:01:18.856368 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-17 01:01:18.856374 | orchestrator | Saturday 17 January 2026 00:59:06 +0000 (0:00:00.288) 0:00:02.186 ******
2026-01-17 01:01:18.856380 | orchestrator | ok: [testbed-node-3]
2026-01-17 01:01:18.856387 | orchestrator | ok: [testbed-node-4]
2026-01-17 01:01:18.856393 | orchestrator | ok: [testbed-node-5]
2026-01-17 01:01:18.856399 | orchestrator |
2026-01-17 01:01:18.856405 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-17 01:01:18.856411 | orchestrator | Saturday 17 January 2026 00:59:06 +0000 (0:00:00.800) 0:00:02.987 ******
2026-01-17 01:01:18.856715 | orchestrator | ok: [testbed-node-3]
2026-01-17 01:01:18.856784 | orchestrator | ok: [testbed-node-4]
2026-01-17 01:01:18.856790 | orchestrator | ok: [testbed-node-5]
2026-01-17 01:01:18.856795 | orchestrator |
2026-01-17 01:01:18.856802 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-01-17 01:01:18.856808 | orchestrator | Saturday 17 January 2026 00:59:07 +0000 (0:00:00.300) 0:00:03.287 ******
2026-01-17 01:01:18.856814 | orchestrator | ok: [testbed-node-3]
2026-01-17 01:01:18.856821 | orchestrator | ok: [testbed-node-4]
2026-01-17 01:01:18.856827 | orchestrator | ok: [testbed-node-5]
2026-01-17 01:01:18.856833 | orchestrator |
2026-01-17 01:01:18.856839 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-01-17 01:01:18.856861 | orchestrator | Saturday 17 January 2026 00:59:07 +0000 (0:00:00.308) 0:00:03.596 ******
2026-01-17 01:01:18.856868 | orchestrator | ok: [testbed-node-3]
2026-01-17 01:01:18.856874 | orchestrator | ok: [testbed-node-4]
2026-01-17 01:01:18.856880 | orchestrator | ok: [testbed-node-5]
2026-01-17 01:01:18.856886 | orchestrator |
2026-01-17 01:01:18.856892 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-01-17 01:01:18.856898 | orchestrator | Saturday 17 January 2026 00:59:07 +0000 (0:00:00.554) 0:00:03.901 ******
2026-01-17 01:01:18.856905 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:01:18.856912 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:01:18.856918 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:01:18.856923 | orchestrator |
2026-01-17 01:01:18.856929 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-01-17 01:01:18.856935 | orchestrator | Saturday 17 January 2026 00:59:08 +0000 (0:00:00.554) 0:00:04.456 ******
2026-01-17 01:01:18.856940 | orchestrator | ok: [testbed-node-3]
2026-01-17 01:01:18.856946 | orchestrator | ok: [testbed-node-4]
2026-01-17 01:01:18.856953 | orchestrator | ok: [testbed-node-5]
2026-01-17 01:01:18.856958 | orchestrator |
2026-01-17 01:01:18.856964 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-01-17 01:01:18.856970 | orchestrator | Saturday 17 January 2026 00:59:08 +0000 (0:00:00.295) 0:00:04.752 ******
2026-01-17 01:01:18.856976 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-17 01:01:18.856982 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-17 01:01:18.856988 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-17 01:01:18.856993 | orchestrator |
2026-01-17 01:01:18.856999 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-01-17 01:01:18.857004 | orchestrator | Saturday 17 January 2026 00:59:09 +0000 (0:00:00.666) 0:00:05.418 ******
2026-01-17 01:01:18.857010 | orchestrator | ok: [testbed-node-3]
2026-01-17 01:01:18.857015 | orchestrator | ok: [testbed-node-4]
2026-01-17 01:01:18.857021 | orchestrator | ok: [testbed-node-5]
2026-01-17 01:01:18.857027 | orchestrator |
2026-01-17 01:01:18.857033 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-01-17 01:01:18.857039 | orchestrator | Saturday 17 January 2026 00:59:09 +0000 (0:00:00.450) 0:00:05.868 ******
2026-01-17 01:01:18.857045 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-17 01:01:18.857051 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-17 01:01:18.857057 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-17 01:01:18.857063 | orchestrator |
2026-01-17 01:01:18.857070 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-01-17 01:01:18.857076 | orchestrator | Saturday 17 January 2026 00:59:12 +0000 (0:00:02.343) 0:00:08.212 ******
2026-01-17 01:01:18.857083 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-17 01:01:18.857090 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-17 01:01:18.857108 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-17 01:01:18.857114 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:01:18.857209 | orchestrator |
2026-01-17 01:01:18.857233 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-01-17 01:01:18.857240 | orchestrator | Saturday 17 January 2026 00:59:12 +0000 (0:00:00.629) 0:00:08.841 ******
2026-01-17 01:01:18.857247 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-17 01:01:18.857256 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-17 01:01:18.857262 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-17 01:01:18.857270 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:01:18.857277 | orchestrator |
2026-01-17 01:01:18.857283 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-01-17 01:01:18.857289 | orchestrator | Saturday 17 January 2026 00:59:13 +0000 (0:00:00.814) 0:00:09.656 ******
2026-01-17 01:01:18.857297 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-17 01:01:18.857313 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-17 01:01:18.857552 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-17 01:01:18.857573 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:01:18.857578 | orchestrator |
2026-01-17 01:01:18.857582 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-01-17 01:01:18.857586 | orchestrator | Saturday 17 January 2026 00:59:13 +0000 (0:00:00.375) 0:00:10.031 ******
2026-01-17 01:01:18.857592 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a337cfdb08d1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-17 00:59:10.509840', 'end': '2026-01-17 00:59:10.548592', 'delta': '0:00:00.038752', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a337cfdb08d1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-17 01:01:18.857599 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '0d7a4ed8b41c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-17 00:59:11.296678', 'end': '2026-01-17 00:59:11.341876', 'delta': '0:00:00.045198', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['0d7a4ed8b41c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-17 01:01:18.857627 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '65bd4ee79600', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-17 00:59:11.894036', 'end': '2026-01-17 00:59:11.942435', 'delta': '0:00:00.048399', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['65bd4ee79600'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-17 01:01:18.857632 | orchestrator |
2026-01-17 01:01:18.857636 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-01-17 01:01:18.857640 | orchestrator | Saturday 17 January 2026 00:59:14 +0000 (0:00:00.195) 0:00:10.226 ******
2026-01-17 01:01:18.857644 | orchestrator | ok: [testbed-node-3]
2026-01-17 01:01:18.857648 | orchestrator | ok: [testbed-node-4]
2026-01-17 01:01:18.857652 | orchestrator | ok: [testbed-node-5]
2026-01-17 01:01:18.857656 | orchestrator |
2026-01-17 01:01:18.857659 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-01-17 01:01:18.857663 | orchestrator | Saturday 17 January 2026 00:59:14 +0000 (0:00:00.428) 0:00:10.655 ******
2026-01-17 01:01:18.857667 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-01-17 01:01:18.857671 | orchestrator |
2026-01-17 01:01:18.857675 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-01-17 01:01:18.857678 | orchestrator | Saturday 17 January 2026 00:59:16 +0000 (0:00:01.715) 0:00:12.370 ****** 2026-01-17 01:01:18.857682 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:01:18.857686 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:01:18.857690 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:01:18.857694 | orchestrator | 2026-01-17 01:01:18.857697 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-17 01:01:18.857701 | orchestrator | Saturday 17 January 2026 00:59:16 +0000 (0:00:00.321) 0:00:12.692 ****** 2026-01-17 01:01:18.857705 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:01:18.857709 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:01:18.857712 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:01:18.857761 | orchestrator | 2026-01-17 01:01:18.857769 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-17 01:01:18.857784 | orchestrator | Saturday 17 January 2026 00:59:17 +0000 (0:00:00.423) 0:00:13.116 ****** 2026-01-17 01:01:18.857793 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:01:18.857799 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:01:18.857805 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:01:18.857812 | orchestrator | 2026-01-17 01:01:18.857819 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-17 01:01:18.857826 | orchestrator | Saturday 17 January 2026 00:59:17 +0000 (0:00:00.497) 0:00:13.614 ****** 2026-01-17 01:01:18.857832 | orchestrator | ok: [testbed-node-3] 2026-01-17 01:01:18.857839 | orchestrator | 2026-01-17 01:01:18.857844 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-17 01:01:18.857848 | orchestrator | Saturday 17 January 2026 00:59:17 +0000 (0:00:00.144) 0:00:13.758 ****** 2026-01-17 01:01:18.857858 | 
orchestrator | skipping: [testbed-node-3] 2026-01-17 01:01:18.857862 | orchestrator | 2026-01-17 01:01:18.857866 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-17 01:01:18.857869 | orchestrator | Saturday 17 January 2026 00:59:17 +0000 (0:00:00.246) 0:00:14.005 ****** 2026-01-17 01:01:18.857873 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:01:18.857877 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:01:18.857881 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:01:18.857885 | orchestrator | 2026-01-17 01:01:18.857888 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-17 01:01:18.857892 | orchestrator | Saturday 17 January 2026 00:59:18 +0000 (0:00:00.289) 0:00:14.295 ****** 2026-01-17 01:01:18.857896 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:01:18.857900 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:01:18.857904 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:01:18.857907 | orchestrator | 2026-01-17 01:01:18.857911 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-17 01:01:18.857915 | orchestrator | Saturday 17 January 2026 00:59:18 +0000 (0:00:00.327) 0:00:14.622 ****** 2026-01-17 01:01:18.857919 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:01:18.857922 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:01:18.857926 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:01:18.857930 | orchestrator | 2026-01-17 01:01:18.857934 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-17 01:01:18.857938 | orchestrator | Saturday 17 January 2026 00:59:19 +0000 (0:00:00.508) 0:00:15.131 ****** 2026-01-17 01:01:18.857956 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:01:18.857960 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:01:18.857964 | 
orchestrator | skipping: [testbed-node-5] 2026-01-17 01:01:18.857968 | orchestrator | 2026-01-17 01:01:18.857971 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-17 01:01:18.857975 | orchestrator | Saturday 17 January 2026 00:59:19 +0000 (0:00:00.322) 0:00:15.453 ****** 2026-01-17 01:01:18.857979 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:01:18.857983 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:01:18.857986 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:01:18.857990 | orchestrator | 2026-01-17 01:01:18.857994 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-17 01:01:18.857998 | orchestrator | Saturday 17 January 2026 00:59:19 +0000 (0:00:00.334) 0:00:15.788 ****** 2026-01-17 01:01:18.858002 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:01:18.858005 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:01:18.858009 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:01:18.858068 | orchestrator | 2026-01-17 01:01:18.858073 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-17 01:01:18.858078 | orchestrator | Saturday 17 January 2026 00:59:20 +0000 (0:00:00.322) 0:00:16.110 ****** 2026-01-17 01:01:18.858081 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:01:18.858085 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:01:18.858089 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:01:18.858093 | orchestrator | 2026-01-17 01:01:18.858096 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-17 01:01:18.858100 | orchestrator | Saturday 17 January 2026 00:59:20 +0000 (0:00:00.534) 0:00:16.645 ****** 2026-01-17 01:01:18.858106 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--c5f49b22--d40f--5ab7--98f7--9762e23da2c0-osd--block--c5f49b22--d40f--5ab7--98f7--9762e23da2c0', 'dm-uuid-LVM-QaFsaK8PUscqv52QG7rZWQsM1ITbmCNtBg9UmwnkCU0TgTFpgJE46eQvvR1UIOjf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858115 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2051e43b--6678--567a--85ad--b7e1187d21ae-osd--block--2051e43b--6678--567a--85ad--b7e1187d21ae', 'dm-uuid-LVM-Md7et7hVBu5ntN3bevHsnjBkleVswA1X1WLsiCL62gGz9fZiSf7sD18qnz4rPMBd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858123 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858135 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858159 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858175 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b', 'scsi-SQEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b-part1', 'scsi-SQEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b-part14', 'scsi-SQEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b-part15', 'scsi-SQEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b-part16', 'scsi-SQEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 01:01:18.858182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6f2a493f--ee42--5e89--bc68--fb4f7dc1b165-osd--block--6f2a493f--ee42--5e89--bc68--fb4f7dc1b165', 'dm-uuid-LVM-nFFSrCL2nvETfTYSLcEWw2ku767Ad4TlanSeVjPGOYbtNyp2dOrEmthQag04Qlfw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c5f49b22--d40f--5ab7--98f7--9762e23da2c0-osd--block--c5f49b22--d40f--5ab7--98f7--9762e23da2c0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RMJI3b-05hW-7xpG-f9bN-7LlA-F5wA-8B2W4U', 'scsi-0QEMU_QEMU_HARDDISK_03c99a05-96d9-4471-aa9e-2837c3fbd541', 'scsi-SQEMU_QEMU_HARDDISK_03c99a05-96d9-4471-aa9e-2837c3fbd541'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 01:01:18.858205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fbc9b557--fafa--5136--b4c6--7d286dd557bb-osd--block--fbc9b557--fafa--5136--b4c6--7d286dd557bb', 'dm-uuid-LVM-tvbYC5qdW0xeFSGscnFgtuguYTc6vFsjyuP8eHblF9gDORksVycTX3WWlG9BStgP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858214 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2051e43b--6678--567a--85ad--b7e1187d21ae-osd--block--2051e43b--6678--567a--85ad--b7e1187d21ae'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ci08xi-DLx5-qkHP-ts8o-o30q-GsMF-9Vu8DA', 'scsi-0QEMU_QEMU_HARDDISK_386eb8af-61b6-405b-8873-9456a29b0ccf', 'scsi-SQEMU_QEMU_HARDDISK_386eb8af-61b6-405b-8873-9456a29b0ccf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 01:01:18.858220 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66cad329-aa8c-4366-8769-2bca3a7bcb41', 'scsi-SQEMU_QEMU_HARDDISK_66cad329-aa8c-4366-8769-2bca3a7bcb41'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 01:01:18.858231 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-17-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 01:01:18.858252 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-01-17 01:01:18.858257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858274 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858285 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858294 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b', 'scsi-SQEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part1', 'scsi-SQEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part14', 'scsi-SQEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part15', 'scsi-SQEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part16', 'scsi-SQEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 01:01:18.858303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6f2a493f--ee42--5e89--bc68--fb4f7dc1b165-osd--block--6f2a493f--ee42--5e89--bc68--fb4f7dc1b165'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3KxdAO-CxAd-wUwe-i40h-hs1c-cSGa-f2Ve6g', 'scsi-0QEMU_QEMU_HARDDISK_89953a4d-629d-4187-87cb-8eaa4172afa2', 'scsi-SQEMU_QEMU_HARDDISK_89953a4d-629d-4187-87cb-8eaa4172afa2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 01:01:18.858308 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fbc9b557--fafa--5136--b4c6--7d286dd557bb-osd--block--fbc9b557--fafa--5136--b4c6--7d286dd557bb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-M9wEuU-Ap7d-T4FW-KZtp-Suyy-BaOI-zarCMP', 'scsi-0QEMU_QEMU_HARDDISK_bd9e2794-f462-41d3-bb22-ac4c4b73281f', 'scsi-SQEMU_QEMU_HARDDISK_bd9e2794-f462-41d3-bb22-ac4c4b73281f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 01:01:18.858316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1215eb05-d4be-4bfd-8c82-e464703dc233', 'scsi-SQEMU_QEMU_HARDDISK_1215eb05-d4be-4bfd-8c82-e464703dc233'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 01:01:18.858321 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-17-00-03-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 01:01:18.858326 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:01:18.858330 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:01:18.858335 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a3dfbdd8--de3c--56f7--9997--9a9b5f483001-osd--block--a3dfbdd8--de3c--56f7--9997--9a9b5f483001', 'dm-uuid-LVM-Hs7oUEeU8ADSWmx04CKn6SuMMp8eUWZStt7UHRd6e2EapFzMVikTSwSmjihiJjrs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--68934a0c--2b18--58d2--8851--459d4d664360-osd--block--68934a0c--2b18--58d2--8851--459d4d664360', 'dm-uuid-LVM-JzKU7Yaxauxxeo3x93Z5swIT25bbKFsjQssE989LuaK4h22b4I0YBNAYmVraKBR4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858354 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858368 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858374 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858384 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-01-17 01:01:18.858397 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858403 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-17 01:01:18.858415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82', 'scsi-SQEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part1', 'scsi-SQEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part14', 'scsi-SQEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part15', 'scsi-SQEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part16', 'scsi-SQEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 01:01:18.858430 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a3dfbdd8--de3c--56f7--9997--9a9b5f483001-osd--block--a3dfbdd8--de3c--56f7--9997--9a9b5f483001'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bjzIvh-K4bB-3USD-1n7N-IeCp-1up8-m3jgq6', 'scsi-0QEMU_QEMU_HARDDISK_653651ff-f0c3-4f93-a415-b7bde2938506', 'scsi-SQEMU_QEMU_HARDDISK_653651ff-f0c3-4f93-a415-b7bde2938506'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 01:01:18.858437 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--68934a0c--2b18--58d2--8851--459d4d664360-osd--block--68934a0c--2b18--58d2--8851--459d4d664360'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sj7qFM-Ltli-Ke0E-lNxX-aEZ4-pO1J-ftJ1GB', 'scsi-0QEMU_QEMU_HARDDISK_3748448b-4cb4-41ff-a93c-c2a900d49ce0', 'scsi-SQEMU_QEMU_HARDDISK_3748448b-4cb4-41ff-a93c-c2a900d49ce0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 01:01:18.858444 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2725b1a-ab02-479a-b1d7-829717bc50e1', 'scsi-SQEMU_QEMU_HARDDISK_b2725b1a-ab02-479a-b1d7-829717bc50e1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 01:01:18.858458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-17-00-03-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-17 01:01:18.858464 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:01:18.858470 | orchestrator | 2026-01-17 01:01:18.858476 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-17 01:01:18.858483 | orchestrator | Saturday 17 January 2026 00:59:21 +0000 (0:00:00.625) 0:00:17.271 ****** 2026-01-17 01:01:18.858490 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c5f49b22--d40f--5ab7--98f7--9762e23da2c0-osd--block--c5f49b22--d40f--5ab7--98f7--9762e23da2c0', 'dm-uuid-LVM-QaFsaK8PUscqv52QG7rZWQsM1ITbmCNtBg9UmwnkCU0TgTFpgJE46eQvvR1UIOjf'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858505 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2051e43b--6678--567a--85ad--b7e1187d21ae-osd--block--2051e43b--6678--567a--85ad--b7e1187d21ae', 'dm-uuid-LVM-Md7et7hVBu5ntN3bevHsnjBkleVswA1X1WLsiCL62gGz9fZiSf7sD18qnz4rPMBd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858512 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858519 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858527 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858547 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858558 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858565 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858576 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858581 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858594 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b', 'scsi-SQEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b-part1', 'scsi-SQEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b-part14', 'scsi-SQEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b-part15', 'scsi-SQEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b-part16', 'scsi-SQEMU_QEMU_HARDDISK_a81b0aae-ecd3-46bc-81d3-c119638f529b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858607 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6f2a493f--ee42--5e89--bc68--fb4f7dc1b165-osd--block--6f2a493f--ee42--5e89--bc68--fb4f7dc1b165', 'dm-uuid-LVM-nFFSrCL2nvETfTYSLcEWw2ku767Ad4TlanSeVjPGOYbtNyp2dOrEmthQag04Qlfw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858617 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c5f49b22--d40f--5ab7--98f7--9762e23da2c0-osd--block--c5f49b22--d40f--5ab7--98f7--9762e23da2c0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RMJI3b-05hW-7xpG-f9bN-7LlA-F5wA-8B2W4U', 'scsi-0QEMU_QEMU_HARDDISK_03c99a05-96d9-4471-aa9e-2837c3fbd541', 'scsi-SQEMU_QEMU_HARDDISK_03c99a05-96d9-4471-aa9e-2837c3fbd541'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858625 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fbc9b557--fafa--5136--b4c6--7d286dd557bb-osd--block--fbc9b557--fafa--5136--b4c6--7d286dd557bb', 'dm-uuid-LVM-tvbYC5qdW0xeFSGscnFgtuguYTc6vFsjyuP8eHblF9gDORksVycTX3WWlG9BStgP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858640 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2051e43b--6678--567a--85ad--b7e1187d21ae-osd--block--2051e43b--6678--567a--85ad--b7e1187d21ae'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ci08xi-DLx5-qkHP-ts8o-o30q-GsMF-9Vu8DA', 'scsi-0QEMU_QEMU_HARDDISK_386eb8af-61b6-405b-8873-9456a29b0ccf', 'scsi-SQEMU_QEMU_HARDDISK_386eb8af-61b6-405b-8873-9456a29b0ccf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858646 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858653 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66cad329-aa8c-4366-8769-2bca3a7bcb41', 'scsi-SQEMU_QEMU_HARDDISK_66cad329-aa8c-4366-8769-2bca3a7bcb41'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858663 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858670 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-17-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858746 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858751 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:01:18.858761 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858765 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858769 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858777 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858781 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a3dfbdd8--de3c--56f7--9997--9a9b5f483001-osd--block--a3dfbdd8--de3c--56f7--9997--9a9b5f483001', 'dm-uuid-LVM-Hs7oUEeU8ADSWmx04CKn6SuMMp8eUWZStt7UHRd6e2EapFzMVikTSwSmjihiJjrs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858789 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858797 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--68934a0c--2b18--58d2--8851--459d4d664360-osd--block--68934a0c--2b18--58d2--8851--459d4d664360', 'dm-uuid-LVM-JzKU7Yaxauxxeo3x93Z5swIT25bbKFsjQssE989LuaK4h22b4I0YBNAYmVraKBR4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858805 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b', 'scsi-SQEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part1', 'scsi-SQEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part14', 'scsi-SQEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part15', 'scsi-SQEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part16', 'scsi-SQEMU_QEMU_HARDDISK_40676af1-bb63-41c0-bff5-9ddc0a326d9b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-17 01:01:18.858810 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858821 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6f2a493f--ee42--5e89--bc68--fb4f7dc1b165-osd--block--6f2a493f--ee42--5e89--bc68--fb4f7dc1b165'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3KxdAO-CxAd-wUwe-i40h-hs1c-cSGa-f2Ve6g', 'scsi-0QEMU_QEMU_HARDDISK_89953a4d-629d-4187-87cb-8eaa4172afa2', 'scsi-SQEMU_QEMU_HARDDISK_89953a4d-629d-4187-87cb-8eaa4172afa2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858825 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858829 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--fbc9b557--fafa--5136--b4c6--7d286dd557bb-osd--block--fbc9b557--fafa--5136--b4c6--7d286dd557bb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-M9wEuU-Ap7d-T4FW-KZtp-Suyy-BaOI-zarCMP', 'scsi-0QEMU_QEMU_HARDDISK_bd9e2794-f462-41d3-bb22-ac4c4b73281f', 'scsi-SQEMU_QEMU_HARDDISK_bd9e2794-f462-41d3-bb22-ac4c4b73281f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858835 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858839 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1215eb05-d4be-4bfd-8c82-e464703dc233', 'scsi-SQEMU_QEMU_HARDDISK_1215eb05-d4be-4bfd-8c82-e464703dc233'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858847 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858855 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-17-00-03-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858928 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858933 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:01:18.858937 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858945 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858949 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858961 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82', 'scsi-SQEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part1', 'scsi-SQEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part14', 'scsi-SQEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part15', 'scsi-SQEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part16', 'scsi-SQEMU_QEMU_HARDDISK_4a12610b-6fe3-4cad-9944-f8a257ec9d82-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858969 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a3dfbdd8--de3c--56f7--9997--9a9b5f483001-osd--block--a3dfbdd8--de3c--56f7--9997--9a9b5f483001'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bjzIvh-K4bB-3USD-1n7N-IeCp-1up8-m3jgq6', 'scsi-0QEMU_QEMU_HARDDISK_653651ff-f0c3-4f93-a415-b7bde2938506', 'scsi-SQEMU_QEMU_HARDDISK_653651ff-f0c3-4f93-a415-b7bde2938506'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858973 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--68934a0c--2b18--58d2--8851--459d4d664360-osd--block--68934a0c--2b18--58d2--8851--459d4d664360'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sj7qFM-Ltli-Ke0E-lNxX-aEZ4-pO1J-ftJ1GB', 'scsi-0QEMU_QEMU_HARDDISK_3748448b-4cb4-41ff-a93c-c2a900d49ce0', 'scsi-SQEMU_QEMU_HARDDISK_3748448b-4cb4-41ff-a93c-c2a900d49ce0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858981 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2725b1a-ab02-479a-b1d7-829717bc50e1', 'scsi-SQEMU_QEMU_HARDDISK_b2725b1a-ab02-479a-b1d7-829717bc50e1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858989 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-17-00-03-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-17 01:01:18.858993 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:01:18.858997 | orchestrator | 2026-01-17 01:01:18.859001 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-17 01:01:18.859005 | orchestrator | Saturday 17 January 2026 00:59:21 +0000 (0:00:00.589) 0:00:17.861 ****** 2026-01-17 01:01:18.859009 | orchestrator | ok: [testbed-node-3] 2026-01-17 01:01:18.859013 | orchestrator | ok: [testbed-node-4] 2026-01-17 01:01:18.859017 | orchestrator | ok: [testbed-node-5] 2026-01-17 01:01:18.859021 | orchestrator | 2026-01-17 01:01:18.859025 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] *************** 2026-01-17 01:01:18.859028 | orchestrator | Saturday 17 January 2026 00:59:22 +0000 (0:00:00.746) 0:00:18.608 ****** 2026-01-17 01:01:18.859032 | orchestrator | ok: [testbed-node-3] 2026-01-17 01:01:18.859036 | orchestrator | ok: [testbed-node-4] 2026-01-17 01:01:18.859040 | orchestrator | ok: [testbed-node-5] 2026-01-17 01:01:18.859044 | orchestrator | 2026-01-17 01:01:18.859047 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-17 01:01:18.859051 | orchestrator | Saturday 17 January 2026 00:59:23 +0000 (0:00:00.531) 0:00:19.139 ****** 2026-01-17 01:01:18.859055 | orchestrator | ok: [testbed-node-3] 2026-01-17 01:01:18.859059 | orchestrator | ok: [testbed-node-4] 2026-01-17 01:01:18.859063 | orchestrator | ok: [testbed-node-5] 2026-01-17 01:01:18.859066 | orchestrator | 2026-01-17 01:01:18.859070 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-17 01:01:18.859074 | orchestrator | Saturday 17 January 2026 00:59:23 +0000 (0:00:00.684) 0:00:19.824 ****** 2026-01-17 01:01:18.859078 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:01:18.859081 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:01:18.859085 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:01:18.859089 | orchestrator | 2026-01-17 01:01:18.859096 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-17 01:01:18.859100 | orchestrator | Saturday 17 January 2026 00:59:24 +0000 (0:00:00.304) 0:00:20.128 ****** 2026-01-17 01:01:18.859104 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:01:18.859108 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:01:18.859111 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:01:18.859115 | orchestrator | 2026-01-17 01:01:18.859119 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2026-01-17 01:01:18.859126 | orchestrator | Saturday 17 January 2026 00:59:24 +0000 (0:00:00.417) 0:00:20.546 ****** 2026-01-17 01:01:18.859130 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:01:18.859134 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:01:18.859137 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:01:18.859141 | orchestrator | 2026-01-17 01:01:18.859145 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-17 01:01:18.859149 | orchestrator | Saturday 17 January 2026 00:59:24 +0000 (0:00:00.530) 0:00:21.077 ****** 2026-01-17 01:01:18.859153 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-01-17 01:01:18.859157 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-01-17 01:01:18.859161 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-01-17 01:01:18.859164 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-01-17 01:01:18.859168 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-01-17 01:01:18.859172 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-01-17 01:01:18.859176 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-01-17 01:01:18.859179 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-01-17 01:01:18.859183 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-01-17 01:01:18.859187 | orchestrator | 2026-01-17 01:01:18.859191 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-17 01:01:18.859195 | orchestrator | Saturday 17 January 2026 00:59:25 +0000 (0:00:00.889) 0:00:21.966 ****** 2026-01-17 01:01:18.859198 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-17 01:01:18.859202 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-17 01:01:18.859206 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2026-01-17 01:01:18.859210 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:01:18.859214 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-17 01:01:18.859217 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-17 01:01:18.859221 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-17 01:01:18.859225 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:01:18.859229 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-17 01:01:18.859233 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-17 01:01:18.859236 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-17 01:01:18.859240 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:01:18.859244 | orchestrator | 2026-01-17 01:01:18.859248 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-17 01:01:18.859252 | orchestrator | Saturday 17 January 2026 00:59:26 +0000 (0:00:00.434) 0:00:22.401 ****** 2026-01-17 01:01:18.859256 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 01:01:18.859260 | orchestrator | 2026-01-17 01:01:18.859264 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-17 01:01:18.859269 | orchestrator | Saturday 17 January 2026 00:59:27 +0000 (0:00:00.710) 0:00:23.111 ****** 2026-01-17 01:01:18.859275 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:01:18.859279 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:01:18.859283 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:01:18.859292 | orchestrator | 2026-01-17 01:01:18.859296 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2026-01-17 01:01:18.859300 | orchestrator | Saturday 17 January 2026 00:59:27 +0000 (0:00:00.326) 0:00:23.438 ****** 2026-01-17 01:01:18.859303 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:01:18.859307 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:01:18.859311 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:01:18.859315 | orchestrator | 2026-01-17 01:01:18.859318 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-17 01:01:18.859322 | orchestrator | Saturday 17 January 2026 00:59:27 +0000 (0:00:00.351) 0:00:23.789 ****** 2026-01-17 01:01:18.859326 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:01:18.859330 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:01:18.859334 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:01:18.859337 | orchestrator | 2026-01-17 01:01:18.859341 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-17 01:01:18.859345 | orchestrator | Saturday 17 January 2026 00:59:28 +0000 (0:00:00.326) 0:00:24.115 ****** 2026-01-17 01:01:18.859349 | orchestrator | ok: [testbed-node-3] 2026-01-17 01:01:18.859352 | orchestrator | ok: [testbed-node-4] 2026-01-17 01:01:18.859356 | orchestrator | ok: [testbed-node-5] 2026-01-17 01:01:18.859360 | orchestrator | 2026-01-17 01:01:18.859364 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-17 01:01:18.859368 | orchestrator | Saturday 17 January 2026 00:59:28 +0000 (0:00:00.919) 0:00:25.035 ****** 2026-01-17 01:01:18.859372 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-17 01:01:18.859375 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-17 01:01:18.859379 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-17 01:01:18.859383 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:01:18.859388 | 
orchestrator | 2026-01-17 01:01:18.859394 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-17 01:01:18.859400 | orchestrator | Saturday 17 January 2026 00:59:29 +0000 (0:00:00.363) 0:00:25.398 ****** 2026-01-17 01:01:18.859406 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-17 01:01:18.859411 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-17 01:01:18.859417 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-17 01:01:18.859423 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:01:18.859429 | orchestrator | 2026-01-17 01:01:18.859433 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-17 01:01:18.859437 | orchestrator | Saturday 17 January 2026 00:59:29 +0000 (0:00:00.376) 0:00:25.775 ****** 2026-01-17 01:01:18.859443 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-17 01:01:18.859447 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-17 01:01:18.859451 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-17 01:01:18.859454 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:01:18.859458 | orchestrator | 2026-01-17 01:01:18.859462 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-17 01:01:18.859466 | orchestrator | Saturday 17 January 2026 00:59:30 +0000 (0:00:00.384) 0:00:26.159 ****** 2026-01-17 01:01:18.859470 | orchestrator | ok: [testbed-node-3] 2026-01-17 01:01:18.859473 | orchestrator | ok: [testbed-node-4] 2026-01-17 01:01:18.859477 | orchestrator | ok: [testbed-node-5] 2026-01-17 01:01:18.859481 | orchestrator | 2026-01-17 01:01:18.859485 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-17 01:01:18.859489 | orchestrator | Saturday 17 January 2026 00:59:30 
+0000 (0:00:00.362) 0:00:26.522 ****** 2026-01-17 01:01:18.859492 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-17 01:01:18.859496 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-17 01:01:18.859500 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-17 01:01:18.859504 | orchestrator | 2026-01-17 01:01:18.859508 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-17 01:01:18.859517 | orchestrator | Saturday 17 January 2026 00:59:31 +0000 (0:00:00.625) 0:00:27.148 ****** 2026-01-17 01:01:18.859521 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-17 01:01:18.859525 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-17 01:01:18.859529 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-17 01:01:18.859532 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-17 01:01:18.859536 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-17 01:01:18.859540 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-17 01:01:18.859544 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-17 01:01:18.859548 | orchestrator | 2026-01-17 01:01:18.859551 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-17 01:01:18.859555 | orchestrator | Saturday 17 January 2026 00:59:32 +0000 (0:00:01.004) 0:00:28.152 ****** 2026-01-17 01:01:18.859559 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-17 01:01:18.859563 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-17 01:01:18.859567 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-17 01:01:18.859571 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-17 01:01:18.859576 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-17 01:01:18.859580 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-17 01:01:18.859587 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-17 01:01:18.859592 | orchestrator |
2026-01-17 01:01:18.859596 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2026-01-17 01:01:18.859600 | orchestrator | Saturday 17 January 2026 00:59:34 +0000 (0:00:02.056) 0:00:30.209 ******
2026-01-17 01:01:18.859605 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:01:18.859609 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:01:18.859613 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2026-01-17 01:01:18.859618 | orchestrator |
2026-01-17 01:01:18.859622 | orchestrator | TASK [create openstack pool(s)] ************************************************
2026-01-17 01:01:18.859626 | orchestrator | Saturday 17 January 2026 00:59:34 +0000 (0:00:00.389) 0:00:30.598 ******
2026-01-17 01:01:18.859631 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-17 01:01:18.859638 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-17 01:01:18.859642 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-17 01:01:18.859647 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-17 01:01:18.859654 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-17 01:01:18.859662 | orchestrator |
2026-01-17 01:01:18.859666 | orchestrator | TASK [generate keys] ***********************************************************
2026-01-17 01:01:18.859671 | orchestrator | Saturday 17 January 2026 01:00:20 +0000 (0:00:45.976) 0:01:16.574 ******
2026-01-17 01:01:18.859676 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-17 01:01:18.859680 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-17 01:01:18.859684 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-17 01:01:18.859689 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-17 01:01:18.859693 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-17 01:01:18.859697 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-17 01:01:18.859702 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2026-01-17 01:01:18.859706 | orchestrator |
2026-01-17 01:01:18.859710 | orchestrator | TASK [get keys from monitors] **************************************************
2026-01-17 01:01:18.859714 | orchestrator | Saturday 17 January 2026 01:00:45 +0000 (0:00:24.718) 0:01:41.292 ******
2026-01-17 01:01:18.859733 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-17 01:01:18.859738 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-17 01:01:18.859742 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-17 01:01:18.859746 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-17 01:01:18.859750 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-17 01:01:18.859754 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-17 01:01:18.859759 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-17 01:01:18.859763 | orchestrator |
2026-01-17 01:01:18.859767 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2026-01-17 01:01:18.859771 | orchestrator | Saturday 17 January 2026 01:00:57 +0000 (0:00:12.148) 0:01:53.441 ******
2026-01-17 01:01:18.859776 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-17 01:01:18.859780 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-17 01:01:18.859785 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-17 01:01:18.859789 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-17 01:01:18.859793 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-17 01:01:18.859800 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-17 01:01:18.859805 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-17 01:01:18.859809 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-17 01:01:18.859813 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-17 01:01:18.859818 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-17 01:01:18.859822 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-17 01:01:18.859826 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-17 01:01:18.859831 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-17 01:01:18.859835 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-17 01:01:18.859843 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-17 01:01:18.859847 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-17 01:01:18.859852 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-17 01:01:18.859856 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-17 01:01:18.859861 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-01-17 01:01:18.859865 | orchestrator |
2026-01-17 01:01:18.859869 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 01:01:18.859874 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-01-17 01:01:18.859879 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-01-17 01:01:18.859884 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-01-17 01:01:18.859889 | orchestrator |
2026-01-17 01:01:18.859893 | orchestrator |
2026-01-17 01:01:18.859897 | orchestrator |
2026-01-17 01:01:18.859902 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 01:01:18.859905 | orchestrator | Saturday 17 January 2026 01:01:16 +0000 (0:00:18.795) 0:02:12.237 ******
2026-01-17 01:01:18.859913 | orchestrator | ===============================================================================
2026-01-17 01:01:18.859917 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.98s
2026-01-17 01:01:18.859920 | orchestrator | generate keys ---------------------------------------------------------- 24.72s
2026-01-17 01:01:18.859924 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.80s
2026-01-17 01:01:18.859928 | orchestrator | get keys from monitors ------------------------------------------------- 12.15s
2026-01-17 01:01:18.859931 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.34s
2026-01-17 01:01:18.859935 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.06s
2026-01-17 01:01:18.859939 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.72s
2026-01-17 01:01:18.859942 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.00s
2026-01-17 01:01:18.859946 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.92s
2026-01-17 01:01:18.859950 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.89s
2026-01-17 01:01:18.859953 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.81s
2026-01-17 01:01:18.859957 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.80s
2026-01-17 01:01:18.859961 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.75s
2026-01-17 01:01:18.859964 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.71s
2026-01-17 01:01:18.859968 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.68s
2026-01-17 01:01:18.859972 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.67s
2026-01-17 01:01:18.859976 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.65s
2026-01-17 01:01:18.859979 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.65s
2026-01-17 01:01:18.859983 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.63s
2026-01-17 01:01:18.859987 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.63s
2026-01-17 01:01:18.859990 | orchestrator | 2026-01-17 01:01:18 | INFO  | Task 72c21961-be6e-47db-9722-e81aadb3b3af is in state STARTED
2026-01-17 01:01:18.861067 | orchestrator | 2026-01-17 01:01:18 | INFO  | Task 4241e033-f5cb-4e76-9aba-7436e17279a2 is in state STARTED
2026-01-17 01:01:18.861179 | orchestrator | 2026-01-17 01:01:18 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:01:21.922973 | orchestrator | 2026-01-17 01:01:21 | INFO  | Task fecfcce5-dc9e-433c-b21d-55e70e0848b6 is in state STARTED
2026-01-17 01:01:21.925121 | orchestrator | 2026-01-17 01:01:21 | INFO  | Task 72c21961-be6e-47db-9722-e81aadb3b3af is in state STARTED
2026-01-17 01:01:21.927532 | orchestrator | 2026-01-17 01:01:21 | INFO  | Task 
4241e033-f5cb-4e76-9aba-7436e17279a2 is in state STARTED 2026-01-17 01:01:21.927910 | orchestrator | 2026-01-17 01:01:21 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:01:24.977609 | orchestrator | 2026-01-17 01:01:24 | INFO  | Task fecfcce5-dc9e-433c-b21d-55e70e0848b6 is in state STARTED 2026-01-17 01:01:24.979679 | orchestrator | 2026-01-17 01:01:24 | INFO  | Task 72c21961-be6e-47db-9722-e81aadb3b3af is in state STARTED 2026-01-17 01:01:24.982126 | orchestrator | 2026-01-17 01:01:24 | INFO  | Task 4241e033-f5cb-4e76-9aba-7436e17279a2 is in state STARTED 2026-01-17 01:01:24.982204 | orchestrator | 2026-01-17 01:01:24 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:01:28.037893 | orchestrator | 2026-01-17 01:01:28 | INFO  | Task fecfcce5-dc9e-433c-b21d-55e70e0848b6 is in state STARTED 2026-01-17 01:01:28.040680 | orchestrator | 2026-01-17 01:01:28 | INFO  | Task 72c21961-be6e-47db-9722-e81aadb3b3af is in state STARTED 2026-01-17 01:01:28.043267 | orchestrator | 2026-01-17 01:01:28 | INFO  | Task 4241e033-f5cb-4e76-9aba-7436e17279a2 is in state STARTED 2026-01-17 01:01:28.043362 | orchestrator | 2026-01-17 01:01:28 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:01:31.087513 | orchestrator | 2026-01-17 01:01:31 | INFO  | Task fecfcce5-dc9e-433c-b21d-55e70e0848b6 is in state STARTED 2026-01-17 01:01:31.088607 | orchestrator | 2026-01-17 01:01:31 | INFO  | Task 72c21961-be6e-47db-9722-e81aadb3b3af is in state STARTED 2026-01-17 01:01:31.090855 | orchestrator | 2026-01-17 01:01:31 | INFO  | Task 4241e033-f5cb-4e76-9aba-7436e17279a2 is in state STARTED 2026-01-17 01:01:31.090997 | orchestrator | 2026-01-17 01:01:31 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:01:34.139690 | orchestrator | 2026-01-17 01:01:34 | INFO  | Task fecfcce5-dc9e-433c-b21d-55e70e0848b6 is in state SUCCESS 2026-01-17 01:01:34.142838 | orchestrator | 2026-01-17 01:01:34.142921 | orchestrator | 2026-01-17 01:01:34.142929 | orchestrator | 
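The interleaved "Task <uuid> is in state STARTED … Wait 1 second(s) until the next check" lines above come from a client polling several task IDs once per interval until each reaches a terminal state (SUCCESS for task fecfcce5 here). A minimal sketch of that wait loop, assuming a hypothetical `fetch_state` callable in place of the real task-state lookup:

```python
import time

# States after which a task will no longer change (assumed set).
TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

def wait_for_tasks(task_ids, fetch_state, interval=1.0, timeout=3600):
    """Poll fetch_state(task_id) until every task reaches a terminal state.

    fetch_state is a hypothetical stand-in returning the current state
    string for a task id; the log's real lookup is not shown here.
    """
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    states = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = fetch_state(task_id)
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
        # Drop finished tasks; sleep only if something is still running.
        pending = {t for t in pending if states[t] not in TERMINAL_STATES}
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return states
```

Each loop pass reports every still-pending task before sleeping, which matches the repeating blocks of three STARTED lines followed by one wait line in the log.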
PLAY [Group hosts based on configuration] ************************************** 2026-01-17 01:01:34.142937 | orchestrator | 2026-01-17 01:01:34.142943 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-17 01:01:34.142964 | orchestrator | Saturday 17 January 2026 00:59:49 +0000 (0:00:00.264) 0:00:00.264 ****** 2026-01-17 01:01:34.142970 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:01:34.142977 | orchestrator | ok: [testbed-node-1] 2026-01-17 01:01:34.142982 | orchestrator | ok: [testbed-node-2] 2026-01-17 01:01:34.142988 | orchestrator | 2026-01-17 01:01:34.143063 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-17 01:01:34.143071 | orchestrator | Saturday 17 January 2026 00:59:49 +0000 (0:00:00.326) 0:00:00.590 ****** 2026-01-17 01:01:34.143078 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-01-17 01:01:34.143085 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-01-17 01:01:34.143091 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-01-17 01:01:34.143097 | orchestrator | 2026-01-17 01:01:34.143112 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-01-17 01:01:34.143118 | orchestrator | 2026-01-17 01:01:34.143481 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-17 01:01:34.143513 | orchestrator | Saturday 17 January 2026 00:59:49 +0000 (0:00:00.436) 0:00:01.026 ****** 2026-01-17 01:01:34.143520 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 01:01:34.143527 | orchestrator | 2026-01-17 01:01:34.143533 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-01-17 01:01:34.143539 | orchestrator | Saturday 17 January 2026 00:59:50 +0000 (0:00:00.509) 
0:00:01.536 ****** 2026-01-17 01:01:34.143551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-17 01:01:34.143585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-17 01:01:34.143598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-17 01:01:34.143604 | orchestrator | 2026-01-17 01:01:34.143610 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-01-17 01:01:34.143616 | orchestrator | Saturday 17 January 2026 00:59:51 +0000 (0:00:01.174) 0:00:02.711 ****** 2026-01-17 01:01:34.143622 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:01:34.143628 | orchestrator | ok: [testbed-node-1] 2026-01-17 01:01:34.143634 | orchestrator | ok: [testbed-node-2] 2026-01-17 01:01:34.143639 | orchestrator | 2026-01-17 01:01:34.143646 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-17 01:01:34.143651 | orchestrator | Saturday 17 January 2026 00:59:51 +0000 (0:00:00.465) 0:00:03.176 ****** 2026-01-17 01:01:34.143657 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-17 01:01:34.143669 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-17 01:01:34.143675 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-01-17 01:01:34.143684 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-01-17 01:01:34.143720 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-01-17 
01:01:34.143726 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-01-17 01:01:34.143732 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-01-17 01:01:34.143738 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-01-17 01:01:34.143743 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-17 01:01:34.143749 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-17 01:01:34.143755 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-01-17 01:01:34.143761 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-01-17 01:01:34.143767 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-01-17 01:01:34.143773 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-01-17 01:01:34.143778 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-01-17 01:01:34.143784 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-01-17 01:01:34.143789 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-17 01:01:34.143795 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-17 01:01:34.143800 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-01-17 01:01:34.143806 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-01-17 01:01:34.143812 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-01-17 01:01:34.143817 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 
'enabled': False})  2026-01-17 01:01:34.143823 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-01-17 01:01:34.143829 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-01-17 01:01:34.143836 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-01-17 01:01:34.143844 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-01-17 01:01:34.143849 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-01-17 01:01:34.143855 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-01-17 01:01:34.143862 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-01-17 01:01:34.143867 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-01-17 01:01:34.143873 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-01-17 01:01:34.143879 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-01-17 01:01:34.143885 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => 
(item={'name': 'nova', 'enabled': True}) 2026-01-17 01:01:34.143892 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-01-17 01:01:34.143902 | orchestrator | 2026-01-17 01:01:34.143908 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-17 01:01:34.143913 | orchestrator | Saturday 17 January 2026 00:59:52 +0000 (0:00:00.781) 0:00:03.958 ****** 2026-01-17 01:01:34.143919 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:01:34.143925 | orchestrator | ok: [testbed-node-1] 2026-01-17 01:01:34.143932 | orchestrator | ok: [testbed-node-2] 2026-01-17 01:01:34.143938 | orchestrator | 2026-01-17 01:01:34.143944 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-17 01:01:34.143950 | orchestrator | Saturday 17 January 2026 00:59:53 +0000 (0:00:00.322) 0:00:04.280 ****** 2026-01-17 01:01:34.143956 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:01:34.143963 | orchestrator | 2026-01-17 01:01:34.143973 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-17 01:01:34.143979 | orchestrator | Saturday 17 January 2026 00:59:53 +0000 (0:00:00.130) 0:00:04.411 ****** 2026-01-17 01:01:34.143988 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:01:34.143995 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:01:34.144000 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:01:34.144005 | orchestrator | 2026-01-17 01:01:34.144011 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-17 01:01:34.144023 | orchestrator | Saturday 17 January 2026 00:59:53 +0000 (0:00:00.545) 0:00:04.956 ****** 2026-01-17 01:01:34.144042 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:01:34.144049 | orchestrator | ok: [testbed-node-1] 
2026-01-17 01:01:34.144055 | orchestrator | ok: [testbed-node-2] 2026-01-17 01:01:34.144061 | orchestrator | 2026-01-17 01:01:34.144067 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-17 01:01:34.144073 | orchestrator | Saturday 17 January 2026 00:59:54 +0000 (0:00:00.316) 0:00:05.273 ****** 2026-01-17 01:01:34.144079 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:01:34.144085 | orchestrator | 2026-01-17 01:01:34.144092 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-17 01:01:34.144098 | orchestrator | Saturday 17 January 2026 00:59:54 +0000 (0:00:00.137) 0:00:05.411 ****** 2026-01-17 01:01:34.144104 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:01:34.144110 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:01:34.144116 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:01:34.144121 | orchestrator | 2026-01-17 01:01:34.144127 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-17 01:01:34.144133 | orchestrator | Saturday 17 January 2026 00:59:54 +0000 (0:00:00.286) 0:00:05.698 ****** 2026-01-17 01:01:34.144139 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:01:34.144145 | orchestrator | ok: [testbed-node-1] 2026-01-17 01:01:34.144151 | orchestrator | ok: [testbed-node-2] 2026-01-17 01:01:34.144157 | orchestrator | 2026-01-17 01:01:34.144163 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-17 01:01:34.144169 | orchestrator | Saturday 17 January 2026 00:59:54 +0000 (0:00:00.329) 0:00:06.027 ****** 2026-01-17 01:01:34.144564 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:01:34.144583 | orchestrator | 2026-01-17 01:01:34.144590 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-17 01:01:34.144597 | orchestrator | Saturday 17 January 2026 
00:59:55 +0000 (0:00:00.487) 0:00:06.515 ****** 2026-01-17 01:01:34.144603 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:01:34.144610 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:01:34.144616 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:01:34.144622 | orchestrator | 2026-01-17 01:01:34.144627 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-17 01:01:34.144634 | orchestrator | Saturday 17 January 2026 00:59:55 +0000 (0:00:00.358) 0:00:06.873 ****** 2026-01-17 01:01:34.144640 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:01:34.144647 | orchestrator | ok: [testbed-node-1] 2026-01-17 01:01:34.144665 | orchestrator | ok: [testbed-node-2] 2026-01-17 01:01:34.144671 | orchestrator | 2026-01-17 01:01:34.144677 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-17 01:01:34.144683 | orchestrator | Saturday 17 January 2026 00:59:55 +0000 (0:00:00.326) 0:00:07.200 ****** 2026-01-17 01:01:34.144689 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:01:34.144718 | orchestrator | 2026-01-17 01:01:34.144724 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-17 01:01:34.144730 | orchestrator | Saturday 17 January 2026 00:59:56 +0000 (0:00:00.161) 0:00:07.362 ****** 2026-01-17 01:01:34.144736 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:01:34.144741 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:01:34.144747 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:01:34.144754 | orchestrator | 2026-01-17 01:01:34.144760 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-17 01:01:34.144766 | orchestrator | Saturday 17 January 2026 00:59:56 +0000 (0:00:00.285) 0:00:07.647 ****** 2026-01-17 01:01:34.144773 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:01:34.144778 | orchestrator | ok: 
[testbed-node-1] 2026-01-17 01:01:34.144784 | orchestrator | ok: [testbed-node-2] 2026-01-17 01:01:34.144790 | orchestrator | 2026-01-17 01:01:34.144795 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-17 01:01:34.144801 | orchestrator | Saturday 17 January 2026 00:59:56 +0000 (0:00:00.525) 0:00:08.173 ****** 2026-01-17 01:01:34.144806 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:01:34.144812 | orchestrator | 2026-01-17 01:01:34.144818 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-17 01:01:34.144825 | orchestrator | Saturday 17 January 2026 00:59:57 +0000 (0:00:00.139) 0:00:08.313 ****** 2026-01-17 01:01:34.144831 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:01:34.144838 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:01:34.144844 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:01:34.144850 | orchestrator | 2026-01-17 01:01:34.144856 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-17 01:01:34.144863 | orchestrator | Saturday 17 January 2026 00:59:57 +0000 (0:00:00.339) 0:00:08.652 ****** 2026-01-17 01:01:34.144870 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:01:34.144876 | orchestrator | ok: [testbed-node-1] 2026-01-17 01:01:34.144882 | orchestrator | ok: [testbed-node-2] 2026-01-17 01:01:34.144888 | orchestrator | 2026-01-17 01:01:34.144894 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-17 01:01:34.144900 | orchestrator | Saturday 17 January 2026 00:59:57 +0000 (0:00:00.332) 0:00:08.984 ****** 2026-01-17 01:01:34.144905 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:01:34.144911 | orchestrator | 2026-01-17 01:01:34.144917 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-17 01:01:34.144923 | orchestrator | Saturday 
17 January 2026 00:59:57 +0000 (0:00:00.142) 0:00:09.127 ****** 2026-01-17 01:01:34.144929 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:01:34.144936 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:01:34.144942 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:01:34.144948 | orchestrator | 2026-01-17 01:01:34.144954 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-17 01:01:34.144973 | orchestrator | Saturday 17 January 2026 00:59:58 +0000 (0:00:00.323) 0:00:09.450 ****** 2026-01-17 01:01:34.144979 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:01:34.144985 | orchestrator | ok: [testbed-node-1] 2026-01-17 01:01:34.144990 | orchestrator | ok: [testbed-node-2] 2026-01-17 01:01:34.144996 | orchestrator | 2026-01-17 01:01:34.145008 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-17 01:01:34.145014 | orchestrator | Saturday 17 January 2026 00:59:58 +0000 (0:00:00.523) 0:00:09.974 ****** 2026-01-17 01:01:34.145020 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:01:34.145026 | orchestrator | 2026-01-17 01:01:34.145031 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-17 01:01:34.145053 | orchestrator | Saturday 17 January 2026 00:59:58 +0000 (0:00:00.129) 0:00:10.103 ****** 2026-01-17 01:01:34.145059 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:01:34.145065 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:01:34.145071 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:01:34.145077 | orchestrator | 2026-01-17 01:01:34.145082 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-17 01:01:34.145088 | orchestrator | Saturday 17 January 2026 00:59:59 +0000 (0:00:00.286) 0:00:10.390 ****** 2026-01-17 01:01:34.145095 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:01:34.145101 | 
orchestrator | ok: [testbed-node-1] 2026-01-17 01:01:34.145107 | orchestrator | ok: [testbed-node-2] 2026-01-17 01:01:34.145112 | orchestrator | 2026-01-17 01:01:34.145118 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-17 01:01:34.145124 | orchestrator | Saturday 17 January 2026 00:59:59 +0000 (0:00:00.367) 0:00:10.758 ****** 2026-01-17 01:01:34.145130 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:01:34.145136 | orchestrator | 2026-01-17 01:01:34.145142 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-17 01:01:34.145148 | orchestrator | Saturday 17 January 2026 00:59:59 +0000 (0:00:00.127) 0:00:10.885 ****** 2026-01-17 01:01:34.145153 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:01:34.145159 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:01:34.145165 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:01:34.145171 | orchestrator | 2026-01-17 01:01:34.145176 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-17 01:01:34.145182 | orchestrator | Saturday 17 January 2026 00:59:59 +0000 (0:00:00.319) 0:00:11.204 ****** 2026-01-17 01:01:34.145189 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:01:34.145194 | orchestrator | ok: [testbed-node-1] 2026-01-17 01:01:34.145200 | orchestrator | ok: [testbed-node-2] 2026-01-17 01:01:34.145206 | orchestrator | 2026-01-17 01:01:34.145212 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-17 01:01:34.145218 | orchestrator | Saturday 17 January 2026 01:00:00 +0000 (0:00:00.573) 0:00:11.778 ****** 2026-01-17 01:01:34.145225 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:01:34.145231 | orchestrator | 2026-01-17 01:01:34.145236 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-17 01:01:34.145242 | 
orchestrator | Saturday 17 January 2026 01:00:00 +0000 (0:00:00.121) 0:00:11.899 ****** 2026-01-17 01:01:34.145248 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:01:34.145254 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:01:34.145260 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:01:34.145265 | orchestrator | 2026-01-17 01:01:34.145271 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-17 01:01:34.145277 | orchestrator | Saturday 17 January 2026 01:00:00 +0000 (0:00:00.287) 0:00:12.187 ****** 2026-01-17 01:01:34.145283 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:01:34.145289 | orchestrator | ok: [testbed-node-1] 2026-01-17 01:01:34.145295 | orchestrator | ok: [testbed-node-2] 2026-01-17 01:01:34.145302 | orchestrator | 2026-01-17 01:01:34.145308 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-17 01:01:34.145314 | orchestrator | Saturday 17 January 2026 01:00:01 +0000 (0:00:00.307) 0:00:12.494 ****** 2026-01-17 01:01:34.145320 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:01:34.145326 | orchestrator | 2026-01-17 01:01:34.145332 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-17 01:01:34.145338 | orchestrator | Saturday 17 January 2026 01:00:01 +0000 (0:00:00.144) 0:00:12.639 ****** 2026-01-17 01:01:34.145344 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:01:34.145349 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:01:34.145355 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:01:34.145361 | orchestrator | 2026-01-17 01:01:34.145366 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-01-17 01:01:34.145379 | orchestrator | Saturday 17 January 2026 01:00:01 +0000 (0:00:00.506) 0:00:13.145 ****** 2026-01-17 01:01:34.145385 | orchestrator | changed: [testbed-node-1] 
2026-01-17 01:01:34.145391 | orchestrator | changed: [testbed-node-2]
2026-01-17 01:01:34.145397 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:01:34.145403 | orchestrator |
2026-01-17 01:01:34.145409 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-01-17 01:01:34.145414 | orchestrator | Saturday 17 January 2026 01:00:03 +0000 (0:00:01.827) 0:00:14.973 ******
2026-01-17 01:01:34.145594 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-17 01:01:34.145602 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-17 01:01:34.145608 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-17 01:01:34.145614 | orchestrator |
2026-01-17 01:01:34.145621 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-01-17 01:01:34.145627 | orchestrator | Saturday 17 January 2026 01:00:05 +0000 (0:00:02.057) 0:00:17.030 ******
2026-01-17 01:01:34.145633 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-17 01:01:34.145640 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-17 01:01:34.145646 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-17 01:01:34.145652 | orchestrator |
2026-01-17 01:01:34.145659 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-01-17 01:01:34.145674 | orchestrator | Saturday 17 January 2026 01:00:07 +0000 (0:00:02.195) 0:00:19.226 ******
2026-01-17 01:01:34.145688 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-17 01:01:34.145723 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-17 01:01:34.145731 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-17 01:01:34.145737 | orchestrator |
2026-01-17 01:01:34.145743 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-01-17 01:01:34.145749 | orchestrator | Saturday 17 January 2026 01:00:09 +0000 (0:00:01.895) 0:00:21.121 ******
2026-01-17 01:01:34.145756 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:01:34.145762 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:01:34.145768 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:01:34.145774 | orchestrator |
2026-01-17 01:01:34.145781 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-01-17 01:01:34.145787 | orchestrator | Saturday 17 January 2026 01:00:10 +0000 (0:00:00.317) 0:00:21.438 ******
2026-01-17 01:01:34.145793 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:01:34.145800 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:01:34.145806 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:01:34.145813 | orchestrator |
2026-01-17 01:01:34.145819 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-17 01:01:34.145825 | orchestrator | Saturday 17 January 2026 01:00:10 +0000 (0:00:00.327) 0:00:21.765 ******
2026-01-17 01:01:34.145833 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 01:01:34.145837 | orchestrator |
2026-01-17 01:01:34.145841 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-01-17 01:01:34.145844 | orchestrator | Saturday 17 January 2026 01:00:11 +0000 (0:00:00.837) 0:00:22.603 ******
2026-01-17 01:01:34.145851 | orchestrator |
changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-17 01:01:34.145879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-17 01:01:34.145893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-17 01:01:34.145901 | orchestrator | 2026-01-17 01:01:34.145907 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-01-17 01:01:34.145914 | orchestrator | Saturday 17 January 2026 01:00:13 +0000 (0:00:01.864) 0:00:24.468 ****** 2026-01-17 01:01:34.145930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-17 01:01:34.145942 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:01:34.145951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-17 01:01:34.145958 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:01:34.145968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 
'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-17 01:01:34.145983 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:01:34.145991 | orchestrator | 2026-01-17 01:01:34.145997 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-01-17 01:01:34.146003 | orchestrator | Saturday 17 January 2026 01:00:13 +0000 (0:00:00.708) 0:00:25.177 ****** 2026-01-17 01:01:34.146113 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-17 01:01:34.146126 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:01:34.146134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-17 01:01:34.146146 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:01:34.146160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-17 01:01:34.146165 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:01:34.146169 | orchestrator | 2026-01-17 01:01:34.146174 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-01-17 01:01:34.146178 | orchestrator | Saturday 17 January 2026 01:00:14 +0000 (0:00:00.795) 0:00:25.972 ****** 2026-01-17 01:01:34.146187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-17 01:01:34.146200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-17 01:01:34.146212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-17 01:01:34.146217 | orchestrator | 2026-01-17 01:01:34.146223 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-17 01:01:34.146232 | orchestrator | Saturday 17 January 2026 01:00:16 +0000 (0:00:01.666) 0:00:27.639 ****** 2026-01-17 01:01:34.146239 | orchestrator | skipping: [testbed-node-0] 2026-01-17 
01:01:34.146246 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:01:34.146252 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:01:34.146257 | orchestrator |
2026-01-17 01:01:34.146263 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-17 01:01:34.146270 | orchestrator | Saturday 17 January 2026 01:00:16 +0000 (0:00:00.363) 0:00:28.002 ******
2026-01-17 01:01:34.146276 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 01:01:34.146282 | orchestrator |
2026-01-17 01:01:34.146289 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-01-17 01:01:34.146297 | orchestrator | Saturday 17 January 2026 01:00:17 +0000 (0:00:00.687) 0:00:28.689 ******
2026-01-17 01:01:34.146302 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:01:34.146306 | orchestrator |
2026-01-17 01:01:34.146310 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-01-17 01:01:34.146318 | orchestrator | Saturday 17 January 2026 01:00:20 +0000 (0:00:02.758) 0:00:31.447 ******
2026-01-17 01:01:34.146322 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:01:34.146326 | orchestrator |
2026-01-17 01:01:34.146331 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-01-17 01:01:34.146339 | orchestrator | Saturday 17 January 2026 01:00:23 +0000 (0:00:02.995) 0:00:34.443 ******
2026-01-17 01:01:34.146344 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:01:34.146348 | orchestrator |
2026-01-17 01:01:34.146353 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-17 01:01:34.146357 | orchestrator | Saturday 17 January 2026 01:00:39 +0000 (0:00:16.599) 0:00:51.043 ******
2026-01-17 01:01:34.146361 | orchestrator |
2026-01-17 01:01:34.146365 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-17 01:01:34.146369 | orchestrator | Saturday 17 January 2026 01:00:39 +0000 (0:00:00.067) 0:00:51.110 ******
2026-01-17 01:01:34.146374 | orchestrator |
2026-01-17 01:01:34.146378 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-17 01:01:34.146383 | orchestrator | Saturday 17 January 2026 01:00:39 +0000 (0:00:00.069) 0:00:51.177 ******
2026-01-17 01:01:34.146387 | orchestrator |
2026-01-17 01:01:34.146392 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-01-17 01:01:34.146396 | orchestrator | Saturday 17 January 2026 01:00:40 +0000 (0:00:00.069) 0:00:51.247 ******
2026-01-17 01:01:34.146400 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:01:34.146405 | orchestrator | changed: [testbed-node-2]
2026-01-17 01:01:34.146409 | orchestrator | changed: [testbed-node-1]
2026-01-17 01:01:34.146413 | orchestrator |
2026-01-17 01:01:34.146417 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 01:01:34.146421 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-17 01:01:34.146427 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-01-17 01:01:34.146430 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-01-17 01:01:34.146434 | orchestrator |
2026-01-17 01:01:34.146438 | orchestrator |
2026-01-17 01:01:34.146441 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 01:01:34.146445 | orchestrator | Saturday 17 January 2026 01:01:31 +0000 (0:00:51.746) 0:01:42.993 ******
2026-01-17 01:01:34.146449 | orchestrator | ===============================================================================
2026-01-17 01:01:34.146453 | orchestrator | horizon : Restart horizon container ------------------------------------ 51.75s
2026-01-17 01:01:34.146457 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.60s
2026-01-17 01:01:34.146460 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 3.00s
2026-01-17 01:01:34.146464 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.76s
2026-01-17 01:01:34.146468 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.20s
2026-01-17 01:01:34.146471 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.06s
2026-01-17 01:01:34.146475 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.90s
2026-01-17 01:01:34.146479 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.86s
2026-01-17 01:01:34.146482 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.83s
2026-01-17 01:01:34.146486 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.67s
2026-01-17 01:01:34.146490 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.17s
2026-01-17 01:01:34.146494 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.84s
2026-01-17 01:01:34.146497 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.80s
2026-01-17 01:01:34.146501 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.78s
2026-01-17 01:01:34.146505 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.71s
2026-01-17 01:01:34.146512 | orchestrator | horizon :
include_tasks ------------------------------------------------- 0.69s 2026-01-17 01:01:34.146515 | orchestrator | horizon : Update policy file name --------------------------------------- 0.57s 2026-01-17 01:01:34.146520 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.55s 2026-01-17 01:01:34.146526 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s 2026-01-17 01:01:34.146532 | orchestrator | horizon : Update policy file name --------------------------------------- 0.52s 2026-01-17 01:01:34.146538 | orchestrator | 2026-01-17 01:01:34 | INFO  | Task 72c21961-be6e-47db-9722-e81aadb3b3af is in state STARTED 2026-01-17 01:01:34.146545 | orchestrator | 2026-01-17 01:01:34 | INFO  | Task 4241e033-f5cb-4e76-9aba-7436e17279a2 is in state STARTED 2026-01-17 01:01:34.146550 | orchestrator | 2026-01-17 01:01:34 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:01:37.200313 | orchestrator | 2026-01-17 01:01:37 | INFO  | Task 72c21961-be6e-47db-9722-e81aadb3b3af is in state STARTED 2026-01-17 01:01:37.201615 | orchestrator | 2026-01-17 01:01:37 | INFO  | Task 4241e033-f5cb-4e76-9aba-7436e17279a2 is in state STARTED 2026-01-17 01:01:37.201673 | orchestrator | 2026-01-17 01:01:37 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:01:40.260296 | orchestrator | 2026-01-17 01:01:40 | INFO  | Task 72c21961-be6e-47db-9722-e81aadb3b3af is in state STARTED 2026-01-17 01:01:40.262304 | orchestrator | 2026-01-17 01:01:40 | INFO  | Task 4241e033-f5cb-4e76-9aba-7436e17279a2 is in state STARTED 2026-01-17 01:01:40.262373 | orchestrator | 2026-01-17 01:01:40 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:01:43.322469 | orchestrator | 2026-01-17 01:01:43 | INFO  | Task 72c21961-be6e-47db-9722-e81aadb3b3af is in state STARTED 2026-01-17 01:01:43.325480 | orchestrator | 2026-01-17 01:01:43 | INFO  | Task 4241e033-f5cb-4e76-9aba-7436e17279a2 is in state STARTED 
2026-01-17 01:01:43.325547 | orchestrator | 2026-01-17 01:01:43 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:01:55.541019 | orchestrator | 2026-01-17 01:01:55 | INFO  | Task 4241e033-f5cb-4e76-9aba-7436e17279a2 is in state SUCCESS
2026-01-17 01:01:58.591096 | orchestrator | 2026-01-17 01:01:58 | INFO  | Task 805bef93-6354-4651-aafc-bd9af69e32d1 is in state STARTED
2026-01-17 01:02:44.353532 | orchestrator | 2026-01-17 01:02:44 | INFO  | Task f8c4b6e6-3f8b-436e-a1c0-08373c0c22cd is in state STARTED
2026-01-17 01:02:44.354655 | orchestrator | 2026-01-17 01:02:44 | INFO  | Task 81129f70-4b45-4ffd-ac06-457ca49d1734 is in state STARTED
2026-01-17 01:02:44.358615 | orchestrator | 2026-01-17 01:02:44 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED
2026-01-17 01:02:44.360891 | orchestrator 
| 2026-01-17 01:02:44 | INFO  | Task 72c21961-be6e-47db-9722-e81aadb3b3af is in state SUCCESS
2026-01-17 01:02:44.361187 | orchestrator |
2026-01-17 01:02:44.361214 | orchestrator |
2026-01-17 01:02:44.361223 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-01-17 01:02:44.361234 | orchestrator |
2026-01-17 01:02:44.361244 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-01-17 01:02:44.361254 | orchestrator | Saturday 17 January 2026 01:01:21 +0000 (0:00:00.159) 0:00:00.159 ******
2026-01-17 01:02:44.361263 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-01-17 01:02:44.361273 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-17 01:02:44.361360 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-17 01:02:44.361373 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-01-17 01:02:44.361381 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-17 01:02:44.361390 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-01-17 01:02:44.361423 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-01-17 01:02:44.361433 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-01-17 01:02:44.361440 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-01-17 01:02:44.361448 | orchestrator |
2026-01-17 01:02:44.361455 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-01-17 01:02:44.361463 | orchestrator | Saturday 17 January 2026 01:01:25 +0000 (0:00:04.677) 0:00:04.837 ******
2026-01-17 01:02:44.361472 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-01-17 01:02:44.361481 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-17 01:02:44.361489 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-17 01:02:44.361498 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-01-17 01:02:44.361507 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-17 01:02:44.361514 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-01-17 01:02:44.361523 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-01-17 01:02:44.361530 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-01-17 01:02:44.361538 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-01-17 01:02:44.361546 | orchestrator |
2026-01-17 01:02:44.361554 | orchestrator | TASK [Create share directory] **************************************************
2026-01-17 01:02:44.361562 | orchestrator | Saturday 17 January 2026 01:01:29 +0000 (0:00:04.174) 0:00:09.011 ******
2026-01-17 01:02:44.361755 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-17 01:02:44.361774 | orchestrator |
2026-01-17 01:02:44.361782 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-01-17 01:02:44.361797 | orchestrator | Saturday 17 January 2026 01:01:30 +0000 (0:00:01.091) 0:00:10.103 ******
2026-01-17 01:02:44.362244 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-01-17 01:02:44.362272 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-01-17 01:02:44.362281 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-01-17 01:02:44.362291 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-01-17 01:02:44.362301 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-01-17 01:02:44.362311 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-01-17 01:02:44.362321 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-01-17 01:02:44.362330 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-01-17 01:02:44.362340 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-01-17 01:02:44.362349 | orchestrator |
2026-01-17 01:02:44.362359 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-01-17 01:02:44.362369 | orchestrator | Saturday 17 January 2026 01:01:44 +0000 (0:00:13.774) 0:00:23.878 ******
2026-01-17 01:02:44.362378 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-01-17 01:02:44.362388 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-01-17 01:02:44.362397 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-01-17 01:02:44.362421 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-01-17 01:02:44.362900 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-01-17 01:02:44.362934 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-01-17 01:02:44.362943 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-01-17 01:02:44.362952 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-01-17 01:02:44.362959 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-01-17 01:02:44.362968 | orchestrator |
2026-01-17 01:02:44.362988 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-01-17 01:02:44.362997 | orchestrator | Saturday 17 January 2026 01:01:48 +0000 (0:00:03.274) 0:00:27.152 ******
2026-01-17 01:02:44.363006 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-01-17 01:02:44.363014 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-01-17 01:02:44.363022 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-01-17 01:02:44.363030 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-01-17 01:02:44.363039 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-01-17 01:02:44.363047 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-01-17 01:02:44.363055 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-01-17 01:02:44.363063 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-01-17 01:02:44.363071 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-01-17 01:02:44.363079 | orchestrator |
2026-01-17 01:02:44.363087 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 01:02:44.363095 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 01:02:44.363105 | orchestrator |
2026-01-17 01:02:44.363113 | orchestrator |
2026-01-17 01:02:44.363121 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 01:02:44.363130 | orchestrator | Saturday 17 January 2026 01:01:55 +0000 (0:00:07.068) 0:00:34.220 ******
2026-01-17 01:02:44.363138 | orchestrator | ===============================================================================
2026-01-17 01:02:44.363146 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.77s
2026-01-17 01:02:44.363155 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.07s
2026-01-17 01:02:44.363163 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.68s
2026-01-17 01:02:44.363170 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.17s
2026-01-17 01:02:44.363179 | orchestrator | Check if target directories exist --------------------------------------- 3.27s
2026-01-17 01:02:44.363187 | orchestrator | Create share directory -------------------------------------------------- 1.09s
2026-01-17 01:02:44.363195 | orchestrator |
2026-01-17 01:02:44.363202 | orchestrator |
2026-01-17 01:02:44.363211 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-17 01:02:44.363221 | orchestrator |
2026-01-17 01:02:44.363229 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-17 01:02:44.363238 | orchestrator | Saturday 17 January 2026 00:59:49 +0000 (0:00:00.261) 0:00:00.261 ******
2026-01-17 01:02:44.363247 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:02:44.363256 | orchestrator | ok: [testbed-node-1]
2026-01-17 01:02:44.363265 | orchestrator | ok: [testbed-node-2]
2026-01-17 01:02:44.363273 | orchestrator |
2026-01-17 01:02:44.363281 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-17 01:02:44.363299 | orchestrator | Saturday 17 January 2026 00:59:49 +0000 (0:00:00.313) 0:00:00.574 ******
2026-01-17 01:02:44.363308 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-01-17 01:02:44.363317 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-01-17 01:02:44.363325 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-01-17 01:02:44.363332 | orchestrator |
2026-01-17 01:02:44.363339 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-01-17 01:02:44.363347 | orchestrator |
2026-01-17 01:02:44.363355 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-17 01:02:44.363363 | orchestrator | Saturday 17 January 2026 00:59:49 +0000 (0:00:00.431) 0:00:01.006 ******
2026-01-17 01:02:44.363372 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 01:02:44.363380 | orchestrator |
2026-01-17 01:02:44.363389 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-01-17 01:02:44.363397 | orchestrator | Saturday 17 January 2026 00:59:50 +0000 (0:00:00.580) 0:00:01.587 ******
2026-01-17 01:02:44.363449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-17 01:02:44.363467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-17 01:02:44.363478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-17 01:02:44.363496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-17 01:02:44.363507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-17 01:02:44.363538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-17 01:02:44.363548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-17 01:02:44.363557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-17 01:02:44.363626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-17 01:02:44.363646 | orchestrator |
2026-01-17 01:02:44.363656 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-01-17 01:02:44.363664 | orchestrator | Saturday 17 January 2026 00:59:52 +0000 (0:00:01.846) 0:00:03.434 ******
2026-01-17 01:02:44.363673 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:02:44.363682 | orchestrator |
2026-01-17 01:02:44.363691 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-01-17 01:02:44.363699 | orchestrator | Saturday 17 January 2026 00:59:52 +0000 (0:00:00.147) 0:00:03.582 ******
2026-01-17 01:02:44.363708 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:02:44.363716 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:02:44.363725 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:02:44.363733 | orchestrator |
2026-01-17 01:02:44.363741 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-01-17 01:02:44.363750 | orchestrator | Saturday 17 January 2026 00:59:52 +0000 (0:00:00.497) 0:00:04.079 ******
2026-01-17 01:02:44.363759 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-17 01:02:44.363769 | orchestrator |
2026-01-17 01:02:44.363777 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-17 01:02:44.363785 | orchestrator | Saturday 17 January 2026 00:59:53 +0000 (0:00:00.828) 0:00:04.908 ******
2026-01-17 01:02:44.363794 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 01:02:44.363802 | orchestrator |
2026-01-17 01:02:44.363809 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-01-17 01:02:44.363816 | orchestrator | Saturday 17 January 2026 00:59:54 +0000 (0:00:00.533) 0:00:05.442 ******
2026-01-17 01:02:44.363831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-17 01:02:44.363846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-17 01:02:44.363855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-17 01:02:44.363870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-17 01:02:44.363879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-17 01:02:44.363888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-17 01:02:44.363907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-17 01:02:44.363921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-17 01:02:44.363928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-17 01:02:44.363942 | orchestrator | 2026-01-17 01:02:44.363950 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-01-17 01:02:44.363957 | orchestrator | Saturday 17 January 2026 00:59:57 +0000 (0:00:03.524) 0:00:08.966 ****** 2026-01-17 01:02:44.363966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-17 01:02:44.363975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-17 01:02:44.363983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-17 01:02:44.363992 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:02:44.364012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-17 01:02:44.364032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-01-17 01:02:44.364041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-17 01:02:44.364049 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:02:44.364058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-17 01:02:44.364068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-17 01:02:44.364083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-17 01:02:44.364091 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:02:44.364101 | orchestrator | 2026-01-17 01:02:44.364114 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-01-17 01:02:44.364123 | orchestrator | Saturday 17 January 2026 00:59:58 +0000 (0:00:00.815) 0:00:09.782 ****** 2026-01-17 01:02:44.364138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-17 01:02:44.364147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-17 01:02:44.364156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-17 01:02:44.364164 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:02:44.364172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-17 01:02:44.364187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-17 01:02:44.364200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-17 01:02:44.364216 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:02:44.364225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-17 01:02:44.364236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-17 
01:02:44.364244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-17 01:02:44.364253 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:02:44.364260 | orchestrator | 2026-01-17 01:02:44.364269 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-01-17 01:02:44.364277 | orchestrator | Saturday 17 January 2026 00:59:59 +0000 (0:00:00.770) 0:00:10.553 ****** 2026-01-17 01:02:44.364291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-17 01:02:44.364310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-17 01:02:44.364320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-17 01:02:44.364329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-17 01:02:44.364337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-17 01:02:44.364351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-17 01:02:44.364369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-17 01:02:44.364377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-17 01:02:44.364387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}}) 2026-01-17 01:02:44.364396 | orchestrator | 2026-01-17 01:02:44.364405 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-01-17 01:02:44.364415 | orchestrator | Saturday 17 January 2026 01:00:02 +0000 (0:00:03.590) 0:00:14.143 ****** 2026-01-17 01:02:44.364424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-17 01:02:44.364434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-17 
01:02:44.364453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-17 01:02:44.364470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-17 01:02:44.364480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-17 01:02:44.364489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-17 01:02:44.364498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-17 01:02:44.364511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-17 01:02:44.364529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-17 01:02:44.364538 | orchestrator | 2026-01-17 01:02:44.364546 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-01-17 01:02:44.364554 | orchestrator | Saturday 17 January 2026 01:00:08 +0000 (0:00:05.660) 0:00:19.803 ****** 2026-01-17 01:02:44.364562 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:02:44.364570 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:02:44.364579 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:02:44.364689 | orchestrator | 2026-01-17 01:02:44.364700 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] 
*************
2026-01-17 01:02:44.364708 | orchestrator | Saturday 17 January 2026 01:00:10 +0000 (0:00:01.447) 0:00:21.251 ******
2026-01-17 01:02:44.364716 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:02:44.364724 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:02:44.364732 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:02:44.364740 | orchestrator |
2026-01-17 01:02:44.364748 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-01-17 01:02:44.364755 | orchestrator | Saturday 17 January 2026 01:00:10 +0000 (0:00:00.308) 0:00:21.867 ******
2026-01-17 01:02:44.364763 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:02:44.364770 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:02:44.364778 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:02:44.364786 | orchestrator |
2026-01-17 01:02:44.364794 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-01-17 01:02:44.364801 | orchestrator | Saturday 17 January 2026 01:00:10 +0000 (0:00:00.534) 0:00:22.176 ******
2026-01-17 01:02:44.364809 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:02:44.364817 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:02:44.364824 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:02:44.364832 | orchestrator |
2026-01-17 01:02:44.364840 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-01-17 01:02:44.364847 | orchestrator | Saturday 17 January 2026 01:00:11 +0000 (0:00:00.534) 0:00:22.710 ******
2026-01-17 01:02:44.364857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-17 01:02:44.364877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-17 01:02:44.364894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-17 01:02:44.364903 | 
orchestrator | skipping: [testbed-node-1] 2026-01-17 01:02:44.364917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-17 01:02:44.364926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-17 01:02:44.364935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-17 01:02:44.364943 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:02:44.364951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-17 01:02:44.364969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-17 01:02:44.364985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-17 01:02:44.364993 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:02:44.365000 | orchestrator | 2026-01-17 01:02:44.365008 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-17 01:02:44.365016 | orchestrator | Saturday 17 January 2026 01:00:12 +0000 (0:00:00.896) 0:00:23.607 ****** 2026-01-17 01:02:44.365025 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:02:44.365032 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:02:44.365040 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:02:44.365048 | orchestrator | 2026-01-17 01:02:44.365056 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-01-17 01:02:44.365064 | orchestrator | Saturday 17 January 2026 01:00:12 +0000 (0:00:00.292) 0:00:23.899 ****** 2026-01-17 01:02:44.365072 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-17 01:02:44.365081 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-17 01:02:44.365088 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-01-17 01:02:44.365096 | orchestrator |
2026-01-17 01:02:44.365104 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-01-17 01:02:44.365112 | orchestrator | Saturday 17 January 2026 01:00:14 +0000 (0:00:01.654) 0:00:25.554 ******
2026-01-17 01:02:44.365120 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-17 01:02:44.365129 | orchestrator |
2026-01-17 01:02:44.365137 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-01-17 01:02:44.365144 | orchestrator | Saturday 17 January 2026 01:00:15 +0000 (0:00:01.055) 0:00:26.609 ******
2026-01-17 01:02:44.365152 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:02:44.365160 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:02:44.365168 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:02:44.365176 | orchestrator |
2026-01-17 01:02:44.365184 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-01-17 01:02:44.365197 | orchestrator | Saturday 17 January 2026 01:00:16 +0000 (0:00:00.883) 0:00:27.493 ******
2026-01-17 01:02:44.365205 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-17 01:02:44.365213 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-17 01:02:44.365221 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-17 01:02:44.365228 | orchestrator |
2026-01-17 01:02:44.365235 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-01-17 01:02:44.365243 | orchestrator | Saturday 17 January 2026 01:00:17 +0000 (0:00:01.115) 0:00:28.609 ******
2026-01-17 01:02:44.365250 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:02:44.365258 | orchestrator | ok: [testbed-node-1]
2026-01-17 01:02:44.365265 | orchestrator | ok: [testbed-node-2]
2026-01-17 01:02:44.365272 | orchestrator |
2026-01-17 01:02:44.365280 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-01-17 01:02:44.365287 | orchestrator | Saturday 17 January 2026 01:00:17 +0000 (0:00:00.319) 0:00:28.929 ******
2026-01-17 01:02:44.365294 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-01-17 01:02:44.365302 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-01-17 01:02:44.365309 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-01-17 01:02:44.365317 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-01-17 01:02:44.365325 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-01-17 01:02:44.365333 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-01-17 01:02:44.365340 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-01-17 01:02:44.365348 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-01-17 01:02:44.365355 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-01-17 01:02:44.365362 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-01-17 01:02:44.365370 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-01-17 01:02:44.365377 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-01-17 01:02:44.365385 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-01-17 01:02:44.365392 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-01-17 01:02:44.365405 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-01-17 01:02:44.365412 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-17 01:02:44.365420 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-17 01:02:44.365427 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-17 01:02:44.365435 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-17 01:02:44.365443 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-17 01:02:44.365454 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-17 01:02:44.365462 | orchestrator |
2026-01-17 01:02:44.365470 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-01-17 01:02:44.365478 | orchestrator | Saturday 17 January 2026 01:00:27 +0000 (0:00:09.513) 0:00:38.442 ******
2026-01-17 01:02:44.365485 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-17 01:02:44.365498 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-17 01:02:44.365506 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-17 01:02:44.365514 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-17 01:02:44.365522 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-17 01:02:44.365529 | orchestrator |
changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-17 01:02:44.365537 | orchestrator | 2026-01-17 01:02:44.365545 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-01-17 01:02:44.365553 | orchestrator | Saturday 17 January 2026 01:00:30 +0000 (0:00:03.152) 0:00:41.595 ****** 2026-01-17 01:02:44.365562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-17 01:02:44.365571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-17 01:02:44.365604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-17 01:02:44.365635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-17 01:02:44.365643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-17 01:02:44.365652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-17 01:02:44.365660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-17 01:02:44.365667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-17 01:02:44.365675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-17 01:02:44.365683 | orchestrator | 2026-01-17 01:02:44.365695 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-17 01:02:44.365702 | orchestrator | Saturday 17 January 2026 01:00:32 +0000 (0:00:02.491) 0:00:44.086 ****** 2026-01-17 01:02:44.365710 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:02:44.365722 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:02:44.365729 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:02:44.365737 | orchestrator | 2026-01-17 01:02:44.365744 | 
orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-01-17 01:02:44.365751 | orchestrator | Saturday 17 January 2026 01:00:33 +0000 (0:00:00.308) 0:00:44.395 ******
2026-01-17 01:02:44.365759 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:02:44.365767 | orchestrator |
2026-01-17 01:02:44.365777 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-01-17 01:02:44.365785 | orchestrator | Saturday 17 January 2026 01:00:35 +0000 (0:00:02.340) 0:00:46.736 ******
2026-01-17 01:02:44.365793 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:02:44.365800 | orchestrator |
2026-01-17 01:02:44.365808 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-01-17 01:02:44.365816 | orchestrator | Saturday 17 January 2026 01:00:37 +0000 (0:00:02.460) 0:00:49.196 ******
2026-01-17 01:02:44.365825 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:02:44.365832 | orchestrator | ok: [testbed-node-1]
2026-01-17 01:02:44.365839 | orchestrator | ok: [testbed-node-2]
2026-01-17 01:02:44.365846 | orchestrator |
2026-01-17 01:02:44.365854 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-01-17 01:02:44.365862 | orchestrator | Saturday 17 January 2026 01:00:39 +0000 (0:00:01.160) 0:00:50.357 ******
2026-01-17 01:02:44.365869 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:02:44.365876 | orchestrator | ok: [testbed-node-1]
2026-01-17 01:02:44.365888 | orchestrator | ok: [testbed-node-2]
2026-01-17 01:02:44.365901 | orchestrator |
2026-01-17 01:02:44.365910 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-01-17 01:02:44.365918 | orchestrator | Saturday 17 January 2026 01:00:39 +0000 (0:00:00.308) 0:00:50.666 ******
2026-01-17 01:02:44.365926 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:02:44.365935 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:02:44.365944 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:02:44.365951 | orchestrator |
2026-01-17 01:02:44.365959 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-01-17 01:02:44.365967 | orchestrator | Saturday 17 January 2026 01:00:39 +0000 (0:00:00.343) 0:00:51.010 ******
2026-01-17 01:02:44.365974 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:02:44.365982 | orchestrator |
2026-01-17 01:02:44.365990 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-01-17 01:02:44.365997 | orchestrator | Saturday 17 January 2026 01:00:55 +0000 (0:00:15.337) 0:01:06.347 ******
2026-01-17 01:02:44.366005 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:02:44.366012 | orchestrator |
2026-01-17 01:02:44.366074 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-01-17 01:02:44.366083 | orchestrator | Saturday 17 January 2026 01:01:07 +0000 (0:00:12.028) 0:01:18.376 ******
2026-01-17 01:02:44.366091 | orchestrator |
2026-01-17 01:02:44.366098 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-01-17 01:02:44.366106 | orchestrator | Saturday 17 January 2026 01:01:07 +0000 (0:00:00.065) 0:01:18.441 ******
2026-01-17 01:02:44.366114 | orchestrator |
2026-01-17 01:02:44.366121 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-01-17 01:02:44.366129 | orchestrator | Saturday 17 January 2026 01:01:07 +0000 (0:00:00.065) 0:01:18.507 ******
2026-01-17 01:02:44.366136 | orchestrator |
2026-01-17 01:02:44.366144 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-01-17 01:02:44.366151 | orchestrator | Saturday 17 January 2026 01:01:07 +0000 (0:00:00.064) 0:01:18.571 ******
2026-01-17 01:02:44.366158 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:02:44.366165 | orchestrator | changed: [testbed-node-1]
2026-01-17 01:02:44.366173 | orchestrator | changed: [testbed-node-2]
2026-01-17 01:02:44.366181 | orchestrator |
2026-01-17 01:02:44.366188 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-01-17 01:02:44.366205 | orchestrator | Saturday 17 January 2026 01:01:21 +0000 (0:00:13.937) 0:01:32.509 ******
2026-01-17 01:02:44.366213 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:02:44.366221 | orchestrator | changed: [testbed-node-2]
2026-01-17 01:02:44.366229 | orchestrator | changed: [testbed-node-1]
2026-01-17 01:02:44.366236 | orchestrator |
2026-01-17 01:02:44.366243 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-01-17 01:02:44.366251 | orchestrator | Saturday 17 January 2026 01:01:31 +0000 (0:00:10.354) 0:01:42.864 ******
2026-01-17 01:02:44.366259 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:02:44.366266 | orchestrator | changed: [testbed-node-1]
2026-01-17 01:02:44.366274 | orchestrator | changed: [testbed-node-2]
2026-01-17 01:02:44.366282 | orchestrator |
2026-01-17 01:02:44.366289 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-17 01:02:44.366296 | orchestrator | Saturday 17 January 2026 01:01:43 +0000 (0:00:11.677) 0:01:54.541 ******
2026-01-17 01:02:44.366305 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 01:02:44.366313 | orchestrator |
2026-01-17 01:02:44.366321 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-01-17 01:02:44.366329 | orchestrator | Saturday 17 January 2026 01:01:44 +0000 (0:00:00.763) 0:01:55.304 ******
2026-01-17 01:02:44.366337 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:02:44.366345 | orchestrator | ok: [testbed-node-1]
2026-01-17 01:02:44.366353 | orchestrator | ok: [testbed-node-2]
2026-01-17 01:02:44.366360 | orchestrator |
2026-01-17 01:02:44.366368 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-01-17 01:02:44.366377 | orchestrator | Saturday 17 January 2026 01:01:44 +0000 (0:00:00.788) 0:01:56.093 ******
2026-01-17 01:02:44.366384 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:02:44.366391 | orchestrator |
2026-01-17 01:02:44.366399 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-01-17 01:02:44.366407 | orchestrator | Saturday 17 January 2026 01:01:46 +0000 (0:00:01.714) 0:01:57.808 ******
2026-01-17 01:02:44.366425 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-01-17 01:02:44.366433 | orchestrator |
2026-01-17 01:02:44.366441 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-01-17 01:02:44.366449 | orchestrator | Saturday 17 January 2026 01:01:59 +0000 (0:00:13.077) 0:02:10.886 ******
2026-01-17 01:02:44.366457 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-01-17 01:02:44.366464 | orchestrator |
2026-01-17 01:02:44.366473 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-01-17 01:02:44.366481 | orchestrator | Saturday 17 January 2026 01:02:28 +0000 (0:00:29.179) 0:02:40.066 ******
2026-01-17 01:02:44.366496 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-01-17 01:02:44.366507 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-01-17 01:02:44.366514 | orchestrator |
2026-01-17 01:02:44.366521 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-01-17 01:02:44.366528 | orchestrator | Saturday 17
January 2026 01:02:36 +0000 (0:00:07.530) 0:02:47.596 ****** 2026-01-17 01:02:44.366536 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:02:44.366651 | orchestrator | 2026-01-17 01:02:44.366663 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-01-17 01:02:44.366671 | orchestrator | Saturday 17 January 2026 01:02:36 +0000 (0:00:00.176) 0:02:47.773 ****** 2026-01-17 01:02:44.366679 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:02:44.366686 | orchestrator | 2026-01-17 01:02:44.366694 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-01-17 01:02:44.366703 | orchestrator | Saturday 17 January 2026 01:02:36 +0000 (0:00:00.162) 0:02:47.936 ****** 2026-01-17 01:02:44.366711 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:02:44.366719 | orchestrator | 2026-01-17 01:02:44.366728 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-01-17 01:02:44.366744 | orchestrator | Saturday 17 January 2026 01:02:36 +0000 (0:00:00.141) 0:02:48.078 ****** 2026-01-17 01:02:44.366752 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:02:44.366759 | orchestrator | 2026-01-17 01:02:44.366767 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-01-17 01:02:44.366774 | orchestrator | Saturday 17 January 2026 01:02:37 +0000 (0:00:00.540) 0:02:48.618 ****** 2026-01-17 01:02:44.366782 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:02:44.366790 | orchestrator | 2026-01-17 01:02:44.366797 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-17 01:02:44.366805 | orchestrator | Saturday 17 January 2026 01:02:40 +0000 (0:00:03.409) 0:02:52.028 ****** 2026-01-17 01:02:44.366813 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:02:44.366822 | orchestrator | skipping: [testbed-node-1] 2026-01-17 
01:02:44.366830 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:02:44.366838 | orchestrator | 2026-01-17 01:02:44.366847 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 01:02:44.366857 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-17 01:02:44.366867 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-17 01:02:44.366876 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-17 01:02:44.366884 | orchestrator | 2026-01-17 01:02:44.366892 | orchestrator | 2026-01-17 01:02:44.366901 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 01:02:44.366910 | orchestrator | Saturday 17 January 2026 01:02:41 +0000 (0:00:00.426) 0:02:52.455 ****** 2026-01-17 01:02:44.366918 | orchestrator | =============================================================================== 2026-01-17 01:02:44.366925 | orchestrator | service-ks-register : keystone | Creating services --------------------- 29.18s 2026-01-17 01:02:44.366933 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.34s 2026-01-17 01:02:44.366941 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 13.94s 2026-01-17 01:02:44.366949 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.08s 2026-01-17 01:02:44.366957 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 12.03s 2026-01-17 01:02:44.366966 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.68s 2026-01-17 01:02:44.366974 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.35s 2026-01-17 01:02:44.366982 | orchestrator | 
keystone : Copying files for keystone-fernet ---------------------------- 9.51s 2026-01-17 01:02:44.367011 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.53s 2026-01-17 01:02:44.367020 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.66s 2026-01-17 01:02:44.367029 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.59s 2026-01-17 01:02:44.367037 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.52s 2026-01-17 01:02:44.367046 | orchestrator | keystone : Creating default user role ----------------------------------- 3.41s 2026-01-17 01:02:44.367054 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.15s 2026-01-17 01:02:44.367062 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.49s 2026-01-17 01:02:44.367098 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.46s 2026-01-17 01:02:44.367107 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.34s 2026-01-17 01:02:44.367122 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.85s 2026-01-17 01:02:44.367139 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.71s 2026-01-17 01:02:44.367147 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.65s 2026-01-17 01:02:44.367155 | orchestrator | 2026-01-17 01:02:44 | INFO  | Task 29e001bb-6e11-4a45-8c2e-826bccfc99aa is in state STARTED 2026-01-17 01:02:44.367162 | orchestrator | 2026-01-17 01:02:44 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:02:47.394365 | orchestrator | 2026-01-17 01:02:47 | INFO  | Task f8c4b6e6-3f8b-436e-a1c0-08373c0c22cd is in state STARTED 2026-01-17 01:02:47.394452 | orchestrator | 
2026-01-17 01:02:47 | INFO  | Task 81129f70-4b45-4ffd-ac06-457ca49d1734 is in state STARTED 2026-01-17 01:02:47.394463 | orchestrator | 2026-01-17 01:02:47 | INFO  | Task 805bef93-6354-4651-aafc-bd9af69e32d1 is in state STARTED 2026-01-17 01:02:47.395777 | orchestrator | 2026-01-17 01:02:47 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED 2026-01-17 01:02:47.395872 | orchestrator | 2026-01-17 01:02:47 | INFO  | Task 29e001bb-6e11-4a45-8c2e-826bccfc99aa is in state STARTED 2026-01-17 01:02:47.395896 | orchestrator | 2026-01-17 01:02:47 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:02:50.435769 | orchestrator | 2026-01-17 01:02:50 | INFO  | Task f8c4b6e6-3f8b-436e-a1c0-08373c0c22cd is in state STARTED 2026-01-17 01:02:50.436494 | orchestrator | 2026-01-17 01:02:50 | INFO  | Task 81129f70-4b45-4ffd-ac06-457ca49d1734 is in state STARTED 2026-01-17 01:02:50.437866 | orchestrator | 2026-01-17 01:02:50 | INFO  | Task 805bef93-6354-4651-aafc-bd9af69e32d1 is in state STARTED 2026-01-17 01:02:50.439002 | orchestrator | 2026-01-17 01:02:50 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED 2026-01-17 01:02:50.440947 | orchestrator | 2026-01-17 01:02:50 | INFO  | Task 29e001bb-6e11-4a45-8c2e-826bccfc99aa is in state STARTED 2026-01-17 01:02:50.440986 | orchestrator | 2026-01-17 01:02:50 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:02:53.508367 | orchestrator | 2026-01-17 01:02:53 | INFO  | Task f8c4b6e6-3f8b-436e-a1c0-08373c0c22cd is in state STARTED 2026-01-17 01:02:53.510089 | orchestrator | 2026-01-17 01:02:53 | INFO  | Task 81129f70-4b45-4ffd-ac06-457ca49d1734 is in state STARTED 2026-01-17 01:02:53.512615 | orchestrator | 2026-01-17 01:02:53 | INFO  | Task 805bef93-6354-4651-aafc-bd9af69e32d1 is in state STARTED 2026-01-17 01:02:53.514060 | orchestrator | 2026-01-17 01:02:53 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED 2026-01-17 01:02:53.516150 | orchestrator | 
2026-01-17 01:02:53 | INFO  | Task 29e001bb-6e11-4a45-8c2e-826bccfc99aa is in state STARTED 2026-01-17 01:02:53.516536 | orchestrator | 2026-01-17 01:02:53 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:02:56.585112 | orchestrator | 2026-01-17 01:02:56 | INFO  | Task f8c4b6e6-3f8b-436e-a1c0-08373c0c22cd is in state STARTED 2026-01-17 01:02:56.591393 | orchestrator | 2026-01-17 01:02:56 | INFO  | Task 81129f70-4b45-4ffd-ac06-457ca49d1734 is in state STARTED 2026-01-17 01:02:56.592889 | orchestrator | 2026-01-17 01:02:56 | INFO  | Task 805bef93-6354-4651-aafc-bd9af69e32d1 is in state SUCCESS 2026-01-17 01:02:56.595051 | orchestrator | 2026-01-17 01:02:56 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED 2026-01-17 01:02:56.596051 | orchestrator | 2026-01-17 01:02:56 | INFO  | Task 29e001bb-6e11-4a45-8c2e-826bccfc99aa is in state STARTED 2026-01-17 01:02:56.596090 | orchestrator | 2026-01-17 01:02:56 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:02:59.661103 | orchestrator | 2026-01-17 01:02:59 | INFO  | Task f8c4b6e6-3f8b-436e-a1c0-08373c0c22cd is in state STARTED 2026-01-17 01:02:59.664344 | orchestrator | 2026-01-17 01:02:59 | INFO  | Task 81129f70-4b45-4ffd-ac06-457ca49d1734 is in state STARTED 2026-01-17 01:02:59.666397 | orchestrator | 2026-01-17 01:02:59 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED 2026-01-17 01:02:59.670829 | orchestrator | 2026-01-17 01:02:59 | INFO  | Task 47dc7220-be13-4f38-9e9d-2b9aa2a6c798 is in state STARTED 2026-01-17 01:02:59.675660 | orchestrator | 2026-01-17 01:02:59 | INFO  | Task 29e001bb-6e11-4a45-8c2e-826bccfc99aa is in state STARTED 2026-01-17 01:02:59.677182 | orchestrator | 2026-01-17 01:02:59 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:03:02.742147 | orchestrator | 2026-01-17 01:03:02 | INFO  | Task f8c4b6e6-3f8b-436e-a1c0-08373c0c22cd is in state STARTED 2026-01-17 01:03:02.743399 | orchestrator | 2026-01-17 01:03:02 | INFO  | 
Task 81129f70-4b45-4ffd-ac06-457ca49d1734 is in state STARTED
2026-01-17 01:03:02.744777 | orchestrator | 2026-01-17 01:03:02 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED
2026-01-17 01:03:02.747322 | orchestrator | 2026-01-17 01:03:02 | INFO  | Task 47dc7220-be13-4f38-9e9d-2b9aa2a6c798 is in state STARTED
2026-01-17 01:03:02.749096 | orchestrator | 2026-01-17 01:03:02 | INFO  | Task 29e001bb-6e11-4a45-8c2e-826bccfc99aa is in state STARTED
2026-01-17 01:03:02.749163 | orchestrator | 2026-01-17 01:03:02 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:04:34.071814 | orchestrator | 2026-01-17 01:04:34 | INFO  | Task f8c4b6e6-3f8b-436e-a1c0-08373c0c22cd is in state STARTED
2026-01-17 01:04:34.071943 | orchestrator | 2026-01-17 01:04:34 | INFO  | Task
81129f70-4b45-4ffd-ac06-457ca49d1734 is in state SUCCESS
2026-01-17 01:04:34.072484 | orchestrator |
2026-01-17 01:04:34.072543 | orchestrator |
2026-01-17 01:04:34.072565 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-01-17 01:04:34.072573 | orchestrator |
2026-01-17 01:04:34.072580 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-01-17 01:04:34.072586 | orchestrator | Saturday 17 January 2026 01:02:00 +0000 (0:00:00.246) 0:00:00.246 ******
2026-01-17 01:04:34.072594 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-01-17 01:04:34.072602 | orchestrator |
2026-01-17 01:04:34.072609 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-01-17 01:04:34.072616 | orchestrator | Saturday 17 January 2026 01:02:00 +0000 (0:00:00.241) 0:00:00.488 ******
2026-01-17 01:04:34.072621 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-01-17 01:04:34.072625 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-01-17 01:04:34.072630 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-01-17 01:04:34.072634 | orchestrator |
2026-01-17 01:04:34.072638 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-01-17 01:04:34.072642 | orchestrator | Saturday 17 January 2026 01:02:01 +0000 (0:00:01.329) 0:00:01.817 ******
2026-01-17 01:04:34.072646 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-01-17 01:04:34.072662 | orchestrator |
2026-01-17 01:04:34.072666 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-01-17 01:04:34.072670 | orchestrator | Saturday 17 January 2026 01:02:03 +0000 (0:00:01.501) 0:00:03.319 ******
2026-01-17 01:04:34.072674 | orchestrator | changed: [testbed-manager]
2026-01-17 01:04:34.072678 | orchestrator |
2026-01-17 01:04:34.072682 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-01-17 01:04:34.072686 | orchestrator | Saturday 17 January 2026 01:02:04 +0000 (0:00:00.965) 0:00:04.285 ******
2026-01-17 01:04:34.072690 | orchestrator | changed: [testbed-manager]
2026-01-17 01:04:34.072693 | orchestrator |
2026-01-17 01:04:34.072697 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-01-17 01:04:34.072701 | orchestrator | Saturday 17 January 2026 01:02:05 +0000 (0:00:00.944) 0:00:05.229 ******
2026-01-17 01:04:34.072705 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-01-17 01:04:34.072709 | orchestrator | ok: [testbed-manager]
2026-01-17 01:04:34.072713 | orchestrator |
2026-01-17 01:04:34.072717 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-01-17 01:04:34.072720 | orchestrator | Saturday 17 January 2026 01:02:47 +0000 (0:00:41.997) 0:00:47.226 ******
2026-01-17 01:04:34.072724 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-01-17 01:04:34.072728 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-01-17 01:04:34.072732 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-01-17 01:04:34.072736 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-01-17 01:04:34.072740 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-01-17 01:04:34.072744 | orchestrator |
2026-01-17 01:04:34.072748 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-01-17 01:04:34.072752 | orchestrator | Saturday 17 January 2026 01:02:50 +0000 (0:00:03.345) 0:00:50.572 ******
2026-01-17 01:04:34.072755 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-01-17 01:04:34.072759 | orchestrator |
2026-01-17 01:04:34.072763 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-01-17 01:04:34.072767 | orchestrator | Saturday 17 January 2026 01:02:50 +0000 (0:00:00.430) 0:00:51.003 ******
2026-01-17 01:04:34.072771 | orchestrator | skipping: [testbed-manager]
2026-01-17 01:04:34.072774 | orchestrator |
2026-01-17 01:04:34.072778 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-01-17 01:04:34.072784 | orchestrator | Saturday 17 January 2026 01:02:50 +0000 (0:00:00.119) 0:00:51.122 ******
2026-01-17 01:04:34.072790 | orchestrator | skipping: [testbed-manager]
2026-01-17 01:04:34.072796 | orchestrator |
2026-01-17 01:04:34.072846 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-01-17 01:04:34.072854 | orchestrator | Saturday 17 January 2026 01:02:51 +0000 (0:00:00.518) 0:00:51.641 ******
2026-01-17 01:04:34.072861 | orchestrator | changed: [testbed-manager]
2026-01-17 01:04:34.072867 | orchestrator |
2026-01-17 01:04:34.072895 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-01-17 01:04:34.072921 | orchestrator | Saturday 17 January 2026 01:02:53 +0000 (0:00:01.581) 0:00:53.222 ******
2026-01-17 01:04:34.072930 | orchestrator | changed: [testbed-manager]
2026-01-17 01:04:34.072936 | orchestrator |
2026-01-17 01:04:34.072943 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-01-17 01:04:34.072949 | orchestrator | Saturday 17 January 2026 01:02:53 +0000 (0:00:00.766) 0:00:53.988 ******
2026-01-17 01:04:34.072956 | orchestrator | changed: [testbed-manager]
2026-01-17 01:04:34.072963 | orchestrator |
2026-01-17 01:04:34.072969 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-01-17 01:04:34.072975 | orchestrator | Saturday 17 January 2026 01:02:54 +0000 (0:00:00.653) 0:00:54.641 ******
2026-01-17 01:04:34.072982 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-01-17 01:04:34.072988 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-01-17 01:04:34.073003 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-01-17 01:04:34.073010 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-01-17 01:04:34.073017 | orchestrator |
2026-01-17 01:04:34.073023 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 01:04:34.073030 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-17 01:04:34.073037 | orchestrator |
2026-01-17 01:04:34.073044 | orchestrator |
2026-01-17 01:04:34.073061 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 01:04:34.073072 | orchestrator | Saturday 17 January 2026 01:02:56 +0000 (0:00:01.598) 0:00:56.240 ******
2026-01-17 01:04:34.073079 | orchestrator | ===============================================================================
2026-01-17 01:04:34.073086 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.00s
2026-01-17 01:04:34.073093 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.35s
2026-01-17 01:04:34.073100 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.60s
2026-01-17 01:04:34.073107 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.58s
2026-01-17 01:04:34.073114 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.50s
2026-01-17 01:04:34.073121 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.33s
2026-01-17 01:04:34.073127 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.97s
2026-01-17 01:04:34.073131 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.94s
2026-01-17 01:04:34.073136 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.77s
2026-01-17 01:04:34.073141 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.65s
2026-01-17 01:04:34.073148 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.52s
2026-01-17 01:04:34.073155 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.43s
2026-01-17 01:04:34.073162 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s
2026-01-17 01:04:34.073169 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s
2026-01-17 01:04:34.073176 | orchestrator |
2026-01-17 01:04:34.073182 | orchestrator |
2026-01-17 01:04:34.073189 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-01-17 01:04:34.073196 | orchestrator |
2026-01-17 01:04:34.073204 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-01-17 01:04:34.073210 | orchestrator | Saturday 17 January 2026 01:02:46 +0000 (0:00:00.120) 0:00:00.120 ******
2026-01-17 01:04:34.073218 | orchestrator | changed: [localhost]
2026-01-17 01:04:34.073252 | orchestrator |
2026-01-17 01:04:34.073261 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-01-17 01:04:34.073268 | orchestrator | Saturday 17 January 2026 01:02:47 +0000 (0:00:01.182) 0:00:01.303 ******
2026-01-17 01:04:34.073274 | orchestrator | changed: [localhost]
2026-01-17 01:04:34.073278 | orchestrator |
2026-01-17 01:04:34.073283 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-01-17 01:04:34.073287 | orchestrator | Saturday 17 January 2026 01:03:16 +0000 (0:00:28.783) 0:00:30.086 ******
2026-01-17 01:04:34.073292 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left).
2026-01-17 01:04:34.073296 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (2 retries left).
2026-01-17 01:04:34.073302 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (1 retries left).
2026-01-17 01:04:34.073315 | orchestrator | changed: [localhost]
2026-01-17 01:04:34.073322 | orchestrator |
2026-01-17 01:04:34.073329 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-17 01:04:34.073336 | orchestrator |
2026-01-17 01:04:34.073343 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-17 01:04:34.073356 | orchestrator | Saturday 17 January 2026 01:04:29 +0000 (0:01:13.315) 0:01:43.402 ******
2026-01-17 01:04:34.073362 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:04:34.073370 | orchestrator | ok: [testbed-node-1]
2026-01-17 01:04:34.073377 | orchestrator | ok: [testbed-node-2]
2026-01-17 01:04:34.073384 | orchestrator |
2026-01-17 01:04:34.073391 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-17 01:04:34.073398 | orchestrator | Saturday 17 January 2026 01:04:30 +0000 (0:00:00.943) 0:01:44.346 ******
2026-01-17 01:04:34.073405 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-01-17 01:04:34.073411 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-01-17 01:04:34.073418 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-01-17 01:04:34.073439 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-01-17 01:04:34.073446 | orchestrator |
2026-01-17 01:04:34.073453 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-01-17 01:04:34.073460 | orchestrator | skipping: no hosts matched
2026-01-17 01:04:34.073466 | orchestrator |
2026-01-17 01:04:34.073473 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 01:04:34.073481 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 01:04:34.073488 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 01:04:34.073496 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 01:04:34.073504 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 01:04:34.073510 | orchestrator |
2026-01-17 01:04:34.073517 | orchestrator |
2026-01-17 01:04:34.073524 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 01:04:34.073530 | orchestrator | Saturday 17 January 2026 01:04:31 +0000 (0:00:01.401) 0:01:45.748 ******
2026-01-17 01:04:34.073537 | orchestrator | ===============================================================================
2026-01-17 01:04:34.073550 | orchestrator | Download ironic-agent kernel ------------------------------------------- 73.32s
2026-01-17 01:04:34.073560 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 28.78s
2026-01-17 01:04:34.073566 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.40s
2026-01-17 01:04:34.073573 | orchestrator | Ensure the destination directory exists --------------------------------- 1.18s
2026-01-17 01:04:34.073579 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.94s
2026-01-17 01:04:34.073660 |
orchestrator | 2026-01-17 01:04:34 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED
2026-01-17 01:04:34.073671 | orchestrator | 2026-01-17 01:04:34 | INFO  | Task 65437b38-f15f-48c3-8b3b-c3322790da8e is in state STARTED
2026-01-17 01:04:34.073940 | orchestrator | 2026-01-17 01:04:34 | INFO  | Task 47dc7220-be13-4f38-9e9d-2b9aa2a6c798 is in state STARTED
2026-01-17 01:04:34.074520 | orchestrator | 2026-01-17 01:04:34 | INFO  | Task 29e001bb-6e11-4a45-8c2e-826bccfc99aa is in state STARTED
2026-01-17 01:04:34.074542 | orchestrator | 2026-01-17 01:04:34 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:04:37.095144 | orchestrator | 2026-01-17 01:04:37 | INFO  | Task f8c4b6e6-3f8b-436e-a1c0-08373c0c22cd is in state STARTED
2026-01-17 01:04:37.095745 | orchestrator | 2026-01-17 01:04:37 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED
2026-01-17 01:04:37.096600 | orchestrator | 2026-01-17 01:04:37 | INFO  | Task 65437b38-f15f-48c3-8b3b-c3322790da8e is in state STARTED
2026-01-17 01:04:37.097764 | orchestrator | 2026-01-17 01:04:37 | INFO  | Task 47dc7220-be13-4f38-9e9d-2b9aa2a6c798 is in state STARTED
2026-01-17 01:04:37.098525 | orchestrator | 2026-01-17 01:04:37 | INFO  | Task 29e001bb-6e11-4a45-8c2e-826bccfc99aa is in state STARTED
2026-01-17 01:04:37.098555 | orchestrator | 2026-01-17 01:04:37 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:04:40.121928 | orchestrator | 2026-01-17 01:04:40 | INFO  | Task f8c4b6e6-3f8b-436e-a1c0-08373c0c22cd is in state STARTED
2026-01-17 01:04:40.122273 | orchestrator | 2026-01-17 01:04:40 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED
2026-01-17 01:04:40.122980 | orchestrator | 2026-01-17 01:04:40 | INFO  | Task 65437b38-f15f-48c3-8b3b-c3322790da8e is in state STARTED
2026-01-17 01:04:40.123512 | orchestrator | 2026-01-17 01:04:40 | INFO  | Task 47dc7220-be13-4f38-9e9d-2b9aa2a6c798 is in state SUCCESS
2026-01-17 01:04:40.124194 | orchestrator | 2026-01-17 01:04:40 | INFO  | Task 29e001bb-6e11-4a45-8c2e-826bccfc99aa is in state STARTED
2026-01-17 01:04:40.124219 | orchestrator | 2026-01-17 01:04:40 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:04:43.161339 | orchestrator | 2026-01-17 01:04:43 | INFO  | Task f8c4b6e6-3f8b-436e-a1c0-08373c0c22cd is in state STARTED
2026-01-17 01:04:43.161757 | orchestrator | 2026-01-17 01:04:43 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED
2026-01-17 01:04:43.163301 | orchestrator | 2026-01-17 01:04:43 | INFO  | Task 65437b38-f15f-48c3-8b3b-c3322790da8e is in state STARTED
2026-01-17 01:04:43.164014 | orchestrator | 2026-01-17 01:04:43 | INFO  | Task 29e001bb-6e11-4a45-8c2e-826bccfc99aa is in state STARTED
2026-01-17 01:04:43.164038 | orchestrator | 2026-01-17 01:04:43 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:04:46.197835 | orchestrator | 2026-01-17 01:04:46 | INFO  | Task f8c4b6e6-3f8b-436e-a1c0-08373c0c22cd is in state STARTED
2026-01-17 01:04:46.198382 | orchestrator | 2026-01-17 01:04:46 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED
2026-01-17 01:04:46.200045 | orchestrator | 2026-01-17 01:04:46 | INFO  | Task 65437b38-f15f-48c3-8b3b-c3322790da8e is in state STARTED
2026-01-17 01:04:46.201441 | orchestrator | 2026-01-17 01:04:46 | INFO  | Task 29e001bb-6e11-4a45-8c2e-826bccfc99aa is in state STARTED
2026-01-17 01:04:46.201516 | orchestrator | 2026-01-17 01:04:46 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:04:49.238010 | orchestrator | 2026-01-17 01:04:49 | INFO  | Task f8c4b6e6-3f8b-436e-a1c0-08373c0c22cd is in state STARTED
2026-01-17 01:04:49.238506 | orchestrator | 2026-01-17 01:04:49 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED
2026-01-17 01:04:49.239675 | orchestrator | 2026-01-17 01:04:49 | INFO  | Task 65437b38-f15f-48c3-8b3b-c3322790da8e is in state STARTED
2026-01-17 01:04:49.240825 | orchestrator | 2026-01-17 01:04:49 | INFO  | Task 29e001bb-6e11-4a45-8c2e-826bccfc99aa is in state STARTED
2026-01-17 01:04:49.240898 | orchestrator | 2026-01-17 01:04:49 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:04:52.280494 | orchestrator | 2026-01-17 01:04:52 | INFO  | Task f8c4b6e6-3f8b-436e-a1c0-08373c0c22cd is in state STARTED
2026-01-17 01:04:52.281362 | orchestrator | 2026-01-17 01:04:52 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED
2026-01-17 01:04:52.282528 | orchestrator | 2026-01-17 01:04:52 | INFO  | Task 65437b38-f15f-48c3-8b3b-c3322790da8e is in state STARTED
2026-01-17 01:04:52.283438 | orchestrator | 2026-01-17 01:04:52 | INFO  | Task 29e001bb-6e11-4a45-8c2e-826bccfc99aa is in state STARTED
2026-01-17 01:04:52.283673 | orchestrator | 2026-01-17 01:04:52 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:04:55.323236 | orchestrator | 2026-01-17 01:04:55 | INFO  | Task f8c4b6e6-3f8b-436e-a1c0-08373c0c22cd is in state STARTED
2026-01-17 01:04:55.323788 | orchestrator | 2026-01-17 01:04:55 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED
2026-01-17 01:04:55.324698 | orchestrator | 2026-01-17 01:04:55 | INFO  | Task 65437b38-f15f-48c3-8b3b-c3322790da8e is in state STARTED
2026-01-17 01:04:55.325424 | orchestrator | 2026-01-17 01:04:55 | INFO  | Task 29e001bb-6e11-4a45-8c2e-826bccfc99aa is in state STARTED
2026-01-17 01:04:55.325567 | orchestrator | 2026-01-17 01:04:55 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:04:58.513911 | orchestrator | 2026-01-17 01:04:58 | INFO  | Task f8c4b6e6-3f8b-436e-a1c0-08373c0c22cd is in state STARTED
2026-01-17 01:04:58.513971 | orchestrator | 2026-01-17 01:04:58 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED
2026-01-17 01:04:58.513981 | orchestrator | 2026-01-17 01:04:58 | INFO  | Task 65437b38-f15f-48c3-8b3b-c3322790da8e is in state STARTED
2026-01-17 01:04:58.513988 | orchestrator | 2026-01-17 01:04:58 | INFO  | Task
29e001bb-6e11-4a45-8c2e-826bccfc99aa is in state STARTED
2026-01-17 01:04:58.513995 | orchestrator | 2026-01-17 01:04:58 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:05:01.388863 | orchestrator | 2026-01-17 01:05:01 | INFO  | Task f8c4b6e6-3f8b-436e-a1c0-08373c0c22cd is in state STARTED
2026-01-17 01:05:01.389355 | orchestrator | 2026-01-17 01:05:01 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED
2026-01-17 01:05:01.390047 | orchestrator | 2026-01-17 01:05:01 | INFO  | Task 65437b38-f15f-48c3-8b3b-c3322790da8e is in state STARTED
2026-01-17 01:05:01.391745 | orchestrator | 2026-01-17 01:05:01 | INFO  | Task 29e001bb-6e11-4a45-8c2e-826bccfc99aa is in state STARTED
2026-01-17 01:05:01.391778 | orchestrator | 2026-01-17 01:05:01 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:05:04.434098 | orchestrator | 2026-01-17 01:05:04 | INFO  | Task f8c4b6e6-3f8b-436e-a1c0-08373c0c22cd is in state SUCCESS
2026-01-17 01:05:04.435200 | orchestrator |
2026-01-17 01:05:04.435251 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-17 01:05:04.435260 | orchestrator | 2.16.14
2026-01-17 01:05:04.435269 | orchestrator |
2026-01-17 01:05:04.435276 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2026-01-17 01:05:04.435283 | orchestrator |
2026-01-17 01:05:04.435290 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-01-17 01:05:04.435298 | orchestrator | Saturday 17 January 2026 01:03:00 +0000 (0:00:00.268) 0:00:00.269 ******
2026-01-17 01:05:04.435311 | orchestrator | changed: [testbed-manager]
2026-01-17 01:05:04.435318 | orchestrator |
2026-01-17 01:05:04.435345 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-01-17 01:05:04.435353 | orchestrator | Saturday 17 January 2026 01:03:02 +0000 (0:00:01.758) 0:00:02.027 ******
2026-01-17 01:05:04.435360 | orchestrator | changed: [testbed-manager]
2026-01-17 01:05:04.435367 | orchestrator |
2026-01-17 01:05:04.435373 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-01-17 01:05:04.435392 | orchestrator | Saturday 17 January 2026 01:03:03 +0000 (0:00:01.157) 0:00:03.185 ******
2026-01-17 01:05:04.435400 | orchestrator | changed: [testbed-manager]
2026-01-17 01:05:04.435423 | orchestrator |
2026-01-17 01:05:04.435430 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-01-17 01:05:04.435437 | orchestrator | Saturday 17 January 2026 01:03:04 +0000 (0:00:01.074) 0:00:04.259 ******
2026-01-17 01:05:04.435450 | orchestrator | changed: [testbed-manager]
2026-01-17 01:05:04.435457 | orchestrator |
2026-01-17 01:05:04.435500 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-01-17 01:05:04.435552 | orchestrator | Saturday 17 January 2026 01:03:06 +0000 (0:00:01.251) 0:00:05.511 ******
2026-01-17 01:05:04.435561 | orchestrator | changed: [testbed-manager]
2026-01-17 01:05:04.435568 | orchestrator |
2026-01-17 01:05:04.435575 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-01-17 01:05:04.435582 | orchestrator | Saturday 17 January 2026 01:03:07 +0000 (0:00:01.165) 0:00:06.676 ******
2026-01-17 01:05:04.435589 | orchestrator | changed: [testbed-manager]
2026-01-17 01:05:04.435596 | orchestrator |
2026-01-17 01:05:04.435603 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-01-17 01:05:04.435611 | orchestrator | Saturday 17 January 2026 01:03:08 +0000 (0:00:01.116) 0:00:07.792 ******
2026-01-17 01:05:04.435617 | orchestrator | changed: [testbed-manager]
2026-01-17 01:05:04.435625 | orchestrator |
2026-01-17 01:05:04.435632 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-01-17 01:05:04.435639 | orchestrator | Saturday 17 January 2026 01:03:10 +0000 (0:00:02.104) 0:00:09.897 ******
2026-01-17 01:05:04.435655 | orchestrator | changed: [testbed-manager]
2026-01-17 01:05:04.435662 | orchestrator |
2026-01-17 01:05:04.435669 | orchestrator | TASK [Create admin user] *******************************************************
2026-01-17 01:05:04.435677 | orchestrator | Saturday 17 January 2026 01:03:11 +0000 (0:00:01.355) 0:00:11.253 ******
2026-01-17 01:05:04.435684 | orchestrator | changed: [testbed-manager]
2026-01-17 01:05:04.435692 | orchestrator |
2026-01-17 01:05:04.435699 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-01-17 01:05:04.435706 | orchestrator | Saturday 17 January 2026 01:04:15 +0000 (0:01:03.691) 0:01:14.945 ******
2026-01-17 01:05:04.435713 | orchestrator | skipping: [testbed-manager]
2026-01-17 01:05:04.435720 | orchestrator |
2026-01-17 01:05:04.435727 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-17 01:05:04.435735 | orchestrator |
2026-01-17 01:05:04.435742 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-17 01:05:04.435749 | orchestrator | Saturday 17 January 2026 01:04:15 +0000 (0:00:00.172) 0:01:15.117 ******
2026-01-17 01:05:04.435756 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:05:04.435763 | orchestrator |
2026-01-17 01:05:04.435770 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-17 01:05:04.435777 | orchestrator |
2026-01-17 01:05:04.435785 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-17 01:05:04.435792 | orchestrator | Saturday 17 January 2026 01:04:27 +0000 (0:00:11.664) 0:01:26.782 ******
2026-01-17 01:05:04.435799 | orchestrator | changed: [testbed-node-1]
2026-01-17 01:05:04.435806 | orchestrator |
2026-01-17 01:05:04.435813 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-17 01:05:04.435820 | orchestrator |
2026-01-17 01:05:04.435827 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-17 01:05:04.435834 | orchestrator | Saturday 17 January 2026 01:04:38 +0000 (0:00:11.169) 0:01:37.951 ******
2026-01-17 01:05:04.435842 | orchestrator | changed: [testbed-node-2]
2026-01-17 01:05:04.435849 | orchestrator |
2026-01-17 01:05:04.435856 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 01:05:04.435863 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-17 01:05:04.435870 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 01:05:04.435886 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 01:05:04.435894 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 01:05:04.435900 | orchestrator |
2026-01-17 01:05:04.435912 | orchestrator |
2026-01-17 01:05:04.435919 | orchestrator |
2026-01-17 01:05:04.435925 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 01:05:04.435931 | orchestrator | Saturday 17 January 2026 01:04:39 +0000 (0:00:01.076) 0:01:39.028 ******
2026-01-17 01:05:04.435937 | orchestrator | ===============================================================================
2026-01-17 01:05:04.435944 | orchestrator | Create admin user ------------------------------------------------------ 63.69s
2026-01-17 01:05:04.435961 | orchestrator | Restart ceph manager service ------------------------------------------- 23.91s
2026-01-17 01:05:04.435968 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.10s
2026-01-17 01:05:04.435975 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.76s
2026-01-17 01:05:04.435981 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.36s
2026-01-17 01:05:04.435988 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.25s
2026-01-17 01:05:04.435994 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.17s
2026-01-17 01:05:04.436001 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.16s
2026-01-17 01:05:04.436008 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.12s
2026-01-17 01:05:04.436015 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.07s
2026-01-17 01:05:04.436021 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.17s
2026-01-17 01:05:04.436028 | orchestrator |
2026-01-17 01:05:04.436034 | orchestrator |
2026-01-17 01:05:04.436041 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-17 01:05:04.436048 | orchestrator |
2026-01-17 01:05:04.436055 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-17 01:05:04.436063 | orchestrator | Saturday 17 January 2026 01:02:46 +0000 (0:00:00.296) 0:00:00.296 ******
2026-01-17 01:05:04.436070 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:05:04.436077 | orchestrator | ok: [testbed-node-1]
2026-01-17 01:05:04.436084 | orchestrator | ok: [testbed-node-2]
2026-01-17 01:05:04.436092 | orchestrator |
2026-01-17 01:05:04.436105 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-17 01:05:04.436112 | orchestrator | Saturday 17 January 2026 01:02:46 +0000 (0:00:00.654) 0:00:00.951 ******
2026-01-17 01:05:04.436119 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-01-17 01:05:04.436127 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-01-17 01:05:04.436133 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-01-17 01:05:04.436140 | orchestrator |
2026-01-17 01:05:04.436147 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-01-17 01:05:04.436154 | orchestrator |
2026-01-17 01:05:04.436161 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-01-17 01:05:04.436168 | orchestrator | Saturday 17 January 2026 01:02:47 +0000 (0:00:00.519) 0:00:01.470 ******
2026-01-17 01:05:04.436178 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 01:05:04.436186 | orchestrator |
2026-01-17 01:05:04.436193 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-01-17 01:05:04.436200 | orchestrator | Saturday 17 January 2026 01:02:47 +0000 (0:00:00.526) 0:00:01.997 ******
2026-01-17 01:05:04.436207 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-01-17 01:05:04.436214 | orchestrator |
2026-01-17 01:05:04.436220 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-01-17 01:05:04.436227 | orchestrator | Saturday 17 January 2026 01:02:51 +0000 (0:00:03.963) 0:00:05.960 ******
2026-01-17 01:05:04.436234 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-01-17 01:05:04.436241 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-01-17 01:05:04.436253 | orchestrator |
2026-01-17 01:05:04.436260 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-01-17 01:05:04.436267 | orchestrator | Saturday 17 January 2026 01:02:58 +0000 (0:00:06.865) 0:00:12.826 ******
2026-01-17 01:05:04.436274 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-01-17 01:05:04.436280 | orchestrator |
2026-01-17 01:05:04.436287 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-01-17 01:05:04.436294 | orchestrator | Saturday 17 January 2026 01:03:02 +0000 (0:00:03.684) 0:00:16.510 ******
2026-01-17 01:05:04.436301 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-17 01:05:04.436308 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-01-17 01:05:04.436314 | orchestrator |
2026-01-17 01:05:04.436321 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-01-17 01:05:04.436328 | orchestrator | Saturday 17 January 2026 01:03:07 +0000 (0:00:04.894) 0:00:21.405 ******
2026-01-17 01:05:04.436335 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-17 01:05:04.436342 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-01-17 01:05:04.436348 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-01-17 01:05:04.436355 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-01-17 01:05:04.436362 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-01-17 01:05:04.436369 | orchestrator |
2026-01-17 01:05:04.436376 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-01-17 01:05:04.436439 | orchestrator | Saturday 17 January 2026 01:03:24 +0000 (0:00:16.817) 0:00:38.223 ******
2026-01-17 01:05:04.436447 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-01-17 01:05:04.436454 | orchestrator |
2026-01-17 01:05:04.436461 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-01-17 01:05:04.436468 | orchestrator | Saturday 17 January 2026 01:03:28 +0000 (0:00:04.344) 0:00:42.567 ******
2026-01-17 01:05:04.436484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-17 01:05:04.436494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http',
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:04.436505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:04.436526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.436534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.436546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.436554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.436562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.436576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.436584 | orchestrator | 2026-01-17 01:05:04.436591 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-01-17 01:05:04.436599 | orchestrator | Saturday 17 January 2026 01:03:31 +0000 (0:00:03.127) 0:00:45.695 ****** 2026-01-17 01:05:04.436606 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-01-17 01:05:04.436613 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-01-17 01:05:04.436620 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-01-17 01:05:04.436628 | orchestrator | 2026-01-17 01:05:04.436635 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-01-17 01:05:04.436651 | orchestrator | Saturday 17 January 2026 01:03:33 +0000 (0:00:01.380) 0:00:47.075 ****** 2026-01-17 01:05:04.436659 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:05:04.436666 | 
orchestrator | 2026-01-17 01:05:04.436672 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-01-17 01:05:04.436679 | orchestrator | Saturday 17 January 2026 01:03:33 +0000 (0:00:00.156) 0:00:47.232 ****** 2026-01-17 01:05:04.436686 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:05:04.436693 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:05:04.436700 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:05:04.436707 | orchestrator | 2026-01-17 01:05:04.436714 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-17 01:05:04.436757 | orchestrator | Saturday 17 January 2026 01:03:33 +0000 (0:00:00.468) 0:00:47.701 ****** 2026-01-17 01:05:04.436764 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 01:05:04.436772 | orchestrator | 2026-01-17 01:05:04.436779 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-01-17 01:05:04.436786 | orchestrator | Saturday 17 January 2026 01:03:34 +0000 (0:00:00.554) 0:00:48.255 ****** 2026-01-17 01:05:04.436799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:04.436807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:04.436822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:04.436830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.436837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.436852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.436864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.436871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.436883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-17 
01:05:04.436890 | orchestrator | 2026-01-17 01:05:04.436897 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-01-17 01:05:04.436907 | orchestrator | Saturday 17 January 2026 01:03:37 +0000 (0:00:03.738) 0:00:51.994 ****** 2026-01-17 01:05:04.436914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-17 01:05:04.436922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-17 01:05:04.436928 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:05:04.436935 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:05:04.436946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-17 01:05:04.436957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-17 01:05:04.436970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:05:04.436978 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:05:04.436985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-17 01:05:04.436993 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-17 01:05:04.437003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:05:04.437014 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:05:04.437021 | orchestrator | 2026-01-17 01:05:04.437028 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-01-17 01:05:04.437034 | orchestrator | Saturday 17 January 2026 01:03:40 +0000 (0:00:02.629) 0:00:54.623 ****** 2026-01-17 01:05:04.437041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-17 01:05:04.437051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-17 01:05:04.437058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:05:04.437065 | orchestrator | skipping: [testbed-node-0] 2026-01-17 
01:05:04.437072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-17 01:05:04.437275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-17 01:05:04.437292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:05:04.437300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-17 01:05:04.437308 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:05:04.437318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}})  2026-01-17 01:05:04.437326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:05:04.437333 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:05:04.437340 | orchestrator | 2026-01-17 01:05:04.437347 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-01-17 01:05:04.437354 | orchestrator | Saturday 17 January 2026 01:03:42 +0000 (0:00:01.820) 0:00:56.448 ****** 2026-01-17 01:05:04.437362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:04.437388 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:04.437428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:04.437437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.437445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.437452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.437467 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.437475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.437483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.437489 | orchestrator | 2026-01-17 01:05:04.437496 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-01-17 01:05:04.437503 | orchestrator | Saturday 17 
January 2026 01:03:46 +0000 (0:00:03.952) 0:01:00.401 ****** 2026-01-17 01:05:04.437510 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:05:04.437517 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:05:04.437524 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:05:04.437531 | orchestrator | 2026-01-17 01:05:04.437538 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-01-17 01:05:04.437547 | orchestrator | Saturday 17 January 2026 01:03:49 +0000 (0:00:03.088) 0:01:03.489 ****** 2026-01-17 01:05:04.437555 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-17 01:05:04.437562 | orchestrator | 2026-01-17 01:05:04.437569 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-01-17 01:05:04.437576 | orchestrator | Saturday 17 January 2026 01:03:50 +0000 (0:00:01.106) 0:01:04.596 ****** 2026-01-17 01:05:04.437583 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:05:04.437590 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:05:04.437597 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:05:04.437604 | orchestrator | 2026-01-17 01:05:04.437611 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-01-17 01:05:04.437618 | orchestrator | Saturday 17 January 2026 01:03:51 +0000 (0:00:01.203) 0:01:05.800 ****** 2026-01-17 01:05:04.437625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:04.437639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:04.437647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:04.437655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.437664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.437672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.437683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.437693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.437700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.437707 | orchestrator | 2026-01-17 01:05:04.437714 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-01-17 01:05:04.437722 | orchestrator | Saturday 17 January 2026 01:04:04 +0000 (0:00:12.283) 0:01:18.083 ****** 2026-01-17 01:05:04.437729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-17 01:05:04.437739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-17 01:05:04.437750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-17 01:05:04.437757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:05:04.437764 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:05:04.437775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-17 01:05:04.437783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:05:04.437790 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:05:04.437800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 
'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-17 01:05:04.437807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-17 01:05:04.437821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:05:04.437828 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:05:04.437868 | orchestrator | 2026-01-17 01:05:04.437878 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-01-17 01:05:04.437885 | orchestrator | Saturday 17 January 2026 01:04:05 +0000 (0:00:01.942) 0:01:20.025 ****** 2026-01-17 01:05:04.437896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 
'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:04.437904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:04.437914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:04.437926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.437932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.437939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.437949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.437957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.437964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:04.437972 | orchestrator | 2026-01-17 01:05:04.437981 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-17 01:05:04.437993 | orchestrator | Saturday 17 January 2026 01:04:09 +0000 (0:00:03.953) 0:01:23.979 ****** 2026-01-17 01:05:04.438000 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:05:04.438007 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:05:04.438038 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:05:04.438045 | orchestrator | 2026-01-17 01:05:04.438053 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-01-17 01:05:04.438060 | orchestrator | Saturday 17 January 2026 01:04:10 +0000 (0:00:00.536) 0:01:24.516 ****** 2026-01-17 01:05:04.438067 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:05:04.438073 | orchestrator | 2026-01-17 01:05:04.438080 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-01-17 01:05:04.438087 | orchestrator | Saturday 17 January 2026 01:04:13 +0000 (0:00:02.550) 0:01:27.066 ****** 2026-01-17 01:05:04.438094 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:05:04.438101 | orchestrator | 2026-01-17 01:05:04.438108 | orchestrator | TASK [barbican : 
Running barbican bootstrap container] ************************* 2026-01-17 01:05:04.438115 | orchestrator | Saturday 17 January 2026 01:04:15 +0000 (0:00:02.527) 0:01:29.594 ****** 2026-01-17 01:05:04.438123 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:05:04.438129 | orchestrator | 2026-01-17 01:05:04.438136 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-17 01:05:04.438143 | orchestrator | Saturday 17 January 2026 01:04:28 +0000 (0:00:12.522) 0:01:42.117 ****** 2026-01-17 01:05:04.438150 | orchestrator | 2026-01-17 01:05:04.438157 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-17 01:05:04.438164 | orchestrator | Saturday 17 January 2026 01:04:28 +0000 (0:00:00.075) 0:01:42.192 ****** 2026-01-17 01:05:04.438171 | orchestrator | 2026-01-17 01:05:04.438178 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-17 01:05:04.438185 | orchestrator | Saturday 17 January 2026 01:04:28 +0000 (0:00:00.068) 0:01:42.261 ****** 2026-01-17 01:05:04.438192 | orchestrator | 2026-01-17 01:05:04.438199 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-01-17 01:05:04.438206 | orchestrator | Saturday 17 January 2026 01:04:28 +0000 (0:00:00.077) 0:01:42.339 ****** 2026-01-17 01:05:04.438213 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:05:04.438220 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:05:04.438227 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:05:04.438234 | orchestrator | 2026-01-17 01:05:04.438242 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-01-17 01:05:04.438249 | orchestrator | Saturday 17 January 2026 01:04:42 +0000 (0:00:14.125) 0:01:56.465 ****** 2026-01-17 01:05:04.438256 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:05:04.438263 | 
orchestrator | changed: [testbed-node-2] 2026-01-17 01:05:04.438270 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:05:04.438277 | orchestrator | 2026-01-17 01:05:04.438284 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-01-17 01:05:04.438291 | orchestrator | Saturday 17 January 2026 01:04:52 +0000 (0:00:09.748) 0:02:06.213 ****** 2026-01-17 01:05:04.438298 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:05:04.438305 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:05:04.438312 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:05:04.438319 | orchestrator | 2026-01-17 01:05:04.438326 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 01:05:04.438338 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-17 01:05:04.438346 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-17 01:05:04.438353 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-17 01:05:04.438365 | orchestrator | 2026-01-17 01:05:04.438372 | orchestrator | 2026-01-17 01:05:04.438392 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 01:05:04.438399 | orchestrator | Saturday 17 January 2026 01:05:03 +0000 (0:00:11.491) 0:02:17.704 ****** 2026-01-17 01:05:04.438406 | orchestrator | =============================================================================== 2026-01-17 01:05:04.438412 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.82s 2026-01-17 01:05:04.438420 | orchestrator | barbican : Restart barbican-api container ------------------------------ 14.13s 2026-01-17 01:05:04.438428 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 
12.52s 2026-01-17 01:05:04.438434 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 12.28s 2026-01-17 01:05:04.438441 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.49s 2026-01-17 01:05:04.438448 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.75s 2026-01-17 01:05:04.438455 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.87s 2026-01-17 01:05:04.438462 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.89s 2026-01-17 01:05:04.438469 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.34s 2026-01-17 01:05:04.438476 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.96s 2026-01-17 01:05:04.438483 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.95s 2026-01-17 01:05:04.438490 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.95s 2026-01-17 01:05:04.438497 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.74s 2026-01-17 01:05:04.438504 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.68s 2026-01-17 01:05:04.438516 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.13s 2026-01-17 01:05:04.438523 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.09s 2026-01-17 01:05:04.438530 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.63s 2026-01-17 01:05:04.438537 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.55s 2026-01-17 01:05:04.438544 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.53s 
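The orchestrator entries that follow show a poll-until-done loop: the same set of task IDs is queried each round, printed with its state, and the loop sleeps one second between rounds until every task leaves STARTED. A minimal sketch of that pattern, under the assumption of a caller-supplied `get_state` lookup (hypothetical; the real state source is not shown in this log) and SUCCESS/FAILURE as the terminal states:

```python
import time

# Terminal states assumed from the log (only SUCCESS is actually observed here).
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_state, interval=1.0, max_rounds=None):
    """Poll task states until all tasks reach a terminal state.

    get_state: callable mapping a task id to its current state string
    (a stand-in for the real task-state lookup, which this log does not show).
    Returns a dict of task id -> final state.
    """
    rounds = 0
    while True:
        states = {tid: get_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        if all(s in TERMINAL_STATES for s in states.values()):
            return states
        rounds += 1
        if max_rounds is not None and rounds >= max_rounds:
            raise TimeoutError("tasks did not finish in time")
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
```

Note that, as in the log, tasks which finish drop out only when checked at the next round boundary; the loop does not react between polls.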
2026-01-17 01:05:04.438551 | orchestrator | barbican : Copying over existing policy file ---------------------------- 1.94s 2026-01-17 01:05:04.438558 | orchestrator | 2026-01-17 01:05:04 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED 2026-01-17 01:05:04.438565 | orchestrator | 2026-01-17 01:05:04 | INFO  | Task 65437b38-f15f-48c3-8b3b-c3322790da8e is in state STARTED 2026-01-17 01:05:04.438573 | orchestrator | 2026-01-17 01:05:04 | INFO  | Task 29e001bb-6e11-4a45-8c2e-826bccfc99aa is in state STARTED 2026-01-17 01:05:04.438580 | orchestrator | 2026-01-17 01:05:04 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:05:07.464073 | orchestrator | 2026-01-17 01:05:07 | INFO  | Task e6c45b1b-c67c-45dd-bbc2-5d9c7663383c is in state STARTED 2026-01-17 01:05:07.464662 | orchestrator | 2026-01-17 01:05:07 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED 2026-01-17 01:05:07.465526 | orchestrator | 2026-01-17 01:05:07 | INFO  | Task 65437b38-f15f-48c3-8b3b-c3322790da8e is in state STARTED 2026-01-17 01:05:07.466442 | orchestrator | 2026-01-17 01:05:07 | INFO  | Task 29e001bb-6e11-4a45-8c2e-826bccfc99aa is in state STARTED 2026-01-17 01:05:07.466475 | orchestrator | 2026-01-17 01:05:07 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:05:10.494715 | orchestrator | 2026-01-17 01:05:10 | INFO  | Task e6c45b1b-c67c-45dd-bbc2-5d9c7663383c is in state STARTED 2026-01-17 01:05:10.495203 | orchestrator | 2026-01-17 01:05:10 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED 2026-01-17 01:05:10.495808 | orchestrator | 2026-01-17 01:05:10 | INFO  | Task 65437b38-f15f-48c3-8b3b-c3322790da8e is in state STARTED 2026-01-17 01:05:10.496566 | orchestrator | 2026-01-17 01:05:10 | INFO  | Task 29e001bb-6e11-4a45-8c2e-826bccfc99aa is in state STARTED 2026-01-17 01:05:10.496607 | orchestrator | 2026-01-17 01:05:10 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:05:13.520547 | orchestrator | 
2026-01-17 01:05:13 | INFO  | Task e6c45b1b-c67c-45dd-bbc2-5d9c7663383c is in state STARTED 2026-01-17 01:05:13.523405 | orchestrator | 2026-01-17 01:05:13 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED 2026-01-17 01:05:13.523924 | orchestrator | 2026-01-17 01:05:13 | INFO  | Task 65437b38-f15f-48c3-8b3b-c3322790da8e is in state STARTED 2026-01-17 01:05:13.524340 | orchestrator | 2026-01-17 01:05:13 | INFO  | Task 29e001bb-6e11-4a45-8c2e-826bccfc99aa is in state STARTED 2026-01-17 01:05:13.524440 | orchestrator | 2026-01-17 01:05:13 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:05:50.103895 | orchestrator | 2026-01-17 01:05:50 | INFO  | Task
e6c45b1b-c67c-45dd-bbc2-5d9c7663383c is in state STARTED 2026-01-17 01:05:50.108605 | orchestrator | 2026-01-17 01:05:50 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED 2026-01-17 01:05:50.113466 | orchestrator | 2026-01-17 01:05:50 | INFO  | Task 6ab6a137-e94f-4a62-9ab1-2d2eb6ed263c is in state STARTED 2026-01-17 01:05:50.119181 | orchestrator | 2026-01-17 01:05:50 | INFO  | Task 65437b38-f15f-48c3-8b3b-c3322790da8e is in state STARTED 2026-01-17 01:05:50.123814 | orchestrator | 2026-01-17 01:05:50 | INFO  | Task 29e001bb-6e11-4a45-8c2e-826bccfc99aa is in state SUCCESS 2026-01-17 01:05:50.125199 | orchestrator | 2026-01-17 01:05:50.125265 | orchestrator | 2026-01-17 01:05:50.125275 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-17 01:05:50.125283 | orchestrator | 2026-01-17 01:05:50.125289 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-17 01:05:50.125295 | orchestrator | Saturday 17 January 2026 01:02:46 +0000 (0:00:00.269) 0:00:00.269 ****** 2026-01-17 01:05:50.125301 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:05:50.125330 | orchestrator | ok: [testbed-node-1] 2026-01-17 01:05:50.125337 | orchestrator | ok: [testbed-node-2] 2026-01-17 01:05:50.125343 | orchestrator | 2026-01-17 01:05:50.125349 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-17 01:05:50.125355 | orchestrator | Saturday 17 January 2026 01:02:46 +0000 (0:00:00.326) 0:00:00.596 ****** 2026-01-17 01:05:50.125361 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-01-17 01:05:50.125368 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-01-17 01:05:50.125382 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-01-17 01:05:50.125388 | orchestrator | 2026-01-17 01:05:50.125394 | orchestrator | PLAY [Apply role designate] 
**************************************************** 2026-01-17 01:05:50.125401 | orchestrator | 2026-01-17 01:05:50.125407 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-17 01:05:50.125413 | orchestrator | Saturday 17 January 2026 01:02:46 +0000 (0:00:00.430) 0:00:01.027 ****** 2026-01-17 01:05:50.125420 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 01:05:50.125427 | orchestrator | 2026-01-17 01:05:50.125433 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-01-17 01:05:50.125439 | orchestrator | Saturday 17 January 2026 01:02:47 +0000 (0:00:00.662) 0:00:01.689 ****** 2026-01-17 01:05:50.125445 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-01-17 01:05:50.125452 | orchestrator | 2026-01-17 01:05:50.125458 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-01-17 01:05:50.125465 | orchestrator | Saturday 17 January 2026 01:02:51 +0000 (0:00:03.638) 0:00:05.327 ****** 2026-01-17 01:05:50.125470 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-01-17 01:05:50.125496 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-01-17 01:05:50.125500 | orchestrator | 2026-01-17 01:05:50.125504 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-01-17 01:05:50.125508 | orchestrator | Saturday 17 January 2026 01:02:58 +0000 (0:00:07.785) 0:00:13.112 ****** 2026-01-17 01:05:50.125512 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-17 01:05:50.125516 | orchestrator | 2026-01-17 01:05:50.125520 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-01-17 
01:05:50.125524 | orchestrator | Saturday 17 January 2026 01:03:02 +0000 (0:00:03.730) 0:00:16.843 ****** 2026-01-17 01:05:50.125528 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-17 01:05:50.125532 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-01-17 01:05:50.125536 | orchestrator | 2026-01-17 01:05:50.125540 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-01-17 01:05:50.125544 | orchestrator | Saturday 17 January 2026 01:03:06 +0000 (0:00:04.155) 0:00:20.998 ****** 2026-01-17 01:05:50.125548 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-17 01:05:50.125552 | orchestrator | 2026-01-17 01:05:50.125556 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-01-17 01:05:50.125559 | orchestrator | Saturday 17 January 2026 01:03:10 +0000 (0:00:04.106) 0:00:25.105 ****** 2026-01-17 01:05:50.125563 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-01-17 01:05:50.125567 | orchestrator | 2026-01-17 01:05:50.125582 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-01-17 01:05:50.125585 | orchestrator | Saturday 17 January 2026 01:03:14 +0000 (0:00:03.935) 0:00:29.040 ****** 2026-01-17 01:05:50.125592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-17 01:05:50.125616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-17 01:05:50.125621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-17 01:05:50.125630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-17 01:05:50.125636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-17 01:05:50.125643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-17 01:05:50.125647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.125657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.125662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}}) 2026-01-17 01:05:50.125674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.125681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.125687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.125696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.125703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.126128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.126148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.126165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.126170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.126175 | orchestrator | 2026-01-17 01:05:50.126180 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-01-17 01:05:50.126184 | orchestrator | Saturday 17 January 2026 01:03:17 +0000 (0:00:03.069) 0:00:32.109 ****** 2026-01-17 01:05:50.126189 | orchestrator | skipping: 
[testbed-node-0] 2026-01-17 01:05:50.126194 | orchestrator | 2026-01-17 01:05:50.126199 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-01-17 01:05:50.126203 | orchestrator | Saturday 17 January 2026 01:03:18 +0000 (0:00:00.161) 0:00:32.271 ****** 2026-01-17 01:05:50.126207 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:05:50.126212 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:05:50.126217 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:05:50.126221 | orchestrator | 2026-01-17 01:05:50.126225 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-17 01:05:50.126236 | orchestrator | Saturday 17 January 2026 01:03:18 +0000 (0:00:00.317) 0:00:32.589 ****** 2026-01-17 01:05:50.126241 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 01:05:50.126246 | orchestrator | 2026-01-17 01:05:50.126250 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-01-17 01:05:50.126254 | orchestrator | Saturday 17 January 2026 01:03:19 +0000 (0:00:00.858) 0:00:33.447 ****** 2026-01-17 01:05:50.126260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-17 01:05:50.126274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-17 01:05:50.126285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-17 01:05:50.126291 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-17 01:05:50.126301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-17 01:05:50.126307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-17 
01:05:50.126334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.126349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.126355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.126362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.126369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.126380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.126387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.126397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.126413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.126423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-17 01:05:50.126429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-17 01:05:50.126435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-17 01:05:50.126441 | orchestrator |
2026-01-17 01:05:50.126448 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2026-01-17 01:05:50.126455 | orchestrator | Saturday 17 January 2026 01:03:25 +0000 (0:00:06.019) 0:00:39.467 ******
2026-01-17 01:05:50.126465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-17 01:05:50.126481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-17 01:05:50.126493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-01-17 01:05:50.126500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.126505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.126510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.126515 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:05:50.126520 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-17 01:05:50.126529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-17 01:05:50.126839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.126947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.126957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.126963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.126967 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:05:50.126977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-17 01:05:50.126991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-17 01:05:50.127028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.127037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.127044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.127051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-17 01:05:50.127057 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:05:50.127064 | orchestrator |
2026-01-17 01:05:50.127070 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2026-01-17 01:05:50.127218 | orchestrator | Saturday 17 January 2026 01:03:26 +0000 (0:00:00.844) 0:00:40.311 ******
2026-01-17 01:05:50.127237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-17 01:05:50.127252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-17 01:05:50.127270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.127275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.127280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.127284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.127289 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:05:50.127297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-17 01:05:50.127305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-17 01:05:50.127423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.127430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.127435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.127439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.127444 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:05:50.127452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}})  2026-01-17 01:05:50.127461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-17 01:05:50.127465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.127480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.127485 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.127490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.127494 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:05:50.127498 | orchestrator | 2026-01-17 01:05:50.127503 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-01-17 01:05:50.127508 | orchestrator | Saturday 17 January 2026 01:03:28 +0000 (0:00:02.132) 0:00:42.444 ****** 2026-01-17 01:05:50.127515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-17 01:05:50.127523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-17 01:05:50.127539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-17 01:05:50.127544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127690 | orchestrator | 2026-01-17 01:05:50.127697 | orchestrator | TASK [designate : 
Copying over designate.conf] ********************************* 2026-01-17 01:05:50.127704 | orchestrator | Saturday 17 January 2026 01:03:36 +0000 (0:00:08.335) 0:00:50.779 ****** 2026-01-17 01:05:50.127714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-17 01:05:50.127722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-17 01:05:50.127727 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-17 01:05:50.127745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127828 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127864 | orchestrator | 2026-01-17 01:05:50.127871 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-01-17 01:05:50.127883 | orchestrator | Saturday 17 January 2026 01:03:59 +0000 (0:00:22.731) 0:01:13.511 ****** 2026-01-17 01:05:50.127889 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-17 01:05:50.127895 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-17 01:05:50.127901 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-17 01:05:50.127909 | orchestrator | 2026-01-17 01:05:50.127915 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-01-17 01:05:50.127922 | orchestrator | Saturday 17 January 2026 01:04:08 +0000 (0:00:08.913) 0:01:22.424 ****** 2026-01-17 01:05:50.127928 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-17 01:05:50.127934 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-17 01:05:50.127941 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-17 01:05:50.127945 | orchestrator | 2026-01-17 01:05:50.127949 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-01-17 01:05:50.127953 | orchestrator | Saturday 17 
January 2026 01:04:11 +0000 (0:00:03.608) 0:01:26.033 ****** 2026-01-17 01:05:50.127957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-17 01:05:50.127964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-17 01:05:50.127974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-17 01:05:50.127979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-17 01:05:50.127986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.127990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.127998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-01-17 01:05:50.128006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128025 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-17 01:05:50.128029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.128054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.128058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-17 01:05:50.128062 | orchestrator |
2026-01-17 01:05:50.128066 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2026-01-17 01:05:50.128069 | orchestrator | Saturday 17 January 2026 01:04:15 +0000 (0:00:03.735) 0:01:29.768 ******
2026-01-17 01:05:50.128074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-17 01:05:50.128082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-17 01:05:50.128086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-17 01:05:50.128097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-01-17 01:05:50.128101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-17 01:05:50.128109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128116 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-17 01:05:50.128138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-17 01:05:50.128142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
2026-01-17 01:05:50 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:05:50.128147 | orchestrator | 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-17 01:05:50.128151 | orchestrator | skipping: [testbed-node-2] => (item={'key':
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.128173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-17 01:05:50.128177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-17 01:05:50.128181 | orchestrator |
2026-01-17 01:05:50.128185 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-01-17 01:05:50.128189 | orchestrator | Saturday 17 January 2026 01:04:19 +0000 (0:00:03.791) 0:01:33.560 ******
2026-01-17 01:05:50.128193 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:05:50.128197 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:05:50.128201 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:05:50.128204 | orchestrator |
2026-01-17 01:05:50.128208 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-01-17 01:05:50.128212 | orchestrator | Saturday 17 January 2026 01:04:20 +0000 (0:00:00.991) 0:01:34.551 ******
2026-01-17 01:05:50.128216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api',
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-17 01:05:50.128223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-17 01:05:50.128227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128249 | orchestrator | skipping: [testbed-node-0] 
2026-01-17 01:05:50.128253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-17 01:05:50.128257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-17 01:05:50.128264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128287 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:05:50.128291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-17 01:05:50.128295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-17 01:05:50.128302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-17 01:05:50.128346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-17 01:05:50.128350 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:05:50.128354 | orchestrator |
2026-01-17 01:05:50.128358 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-01-17 01:05:50.128362 | orchestrator | Saturday 17 January 2026 01:04:21 +0000 (0:00:00.922) 0:01:35.473 ******
2026-01-17 01:05:50.128366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-17 01:05:50.128370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-17 01:05:50.128381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-17 01:05:50.128388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-17 01:05:50.128392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-17 01:05:50.128397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.128401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-17 01:05:50.128405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.128417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.128421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.128431 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.128436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.128440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.128444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.128448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.128458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.128463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.128471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:05:50.128475 | orchestrator | 2026-01-17 01:05:50.128482 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-17 01:05:50.128489 | orchestrator | Saturday 17 January 2026 01:04:26 +0000 (0:00:05.485) 0:01:40.959 ****** 2026-01-17 01:05:50.128495 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:05:50.128500 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:05:50.128506 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:05:50.128511 | orchestrator | 2026-01-17 01:05:50.128517 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-01-17 01:05:50.128524 | orchestrator | Saturday 17 January 2026 01:04:27 +0000 (0:00:00.788) 0:01:41.748 ****** 2026-01-17 01:05:50.128531 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-01-17 01:05:50.128537 | orchestrator | 2026-01-17 01:05:50.128543 | orchestrator | TASK [designate : Creating Designate databases user and 
setting permissions] *** 2026-01-17 01:05:50.128549 | orchestrator | Saturday 17 January 2026 01:04:30 +0000 (0:00:02.655) 0:01:44.403 ****** 2026-01-17 01:05:50.128555 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-17 01:05:50.128561 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-01-17 01:05:50.128567 | orchestrator | 2026-01-17 01:05:50.128574 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-01-17 01:05:50.128580 | orchestrator | Saturday 17 January 2026 01:04:33 +0000 (0:00:03.065) 0:01:47.468 ****** 2026-01-17 01:05:50.128587 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:05:50.128593 | orchestrator | 2026-01-17 01:05:50.128599 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-17 01:05:50.128610 | orchestrator | Saturday 17 January 2026 01:04:47 +0000 (0:00:14.160) 0:02:01.628 ****** 2026-01-17 01:05:50.128614 | orchestrator | 2026-01-17 01:05:50.128618 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-17 01:05:50.128622 | orchestrator | Saturday 17 January 2026 01:04:47 +0000 (0:00:00.262) 0:02:01.891 ****** 2026-01-17 01:05:50.128626 | orchestrator | 2026-01-17 01:05:50.128630 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-17 01:05:50.128634 | orchestrator | Saturday 17 January 2026 01:04:47 +0000 (0:00:00.065) 0:02:01.956 ****** 2026-01-17 01:05:50.128638 | orchestrator | 2026-01-17 01:05:50.128642 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-01-17 01:05:50.128645 | orchestrator | Saturday 17 January 2026 01:04:47 +0000 (0:00:00.069) 0:02:02.025 ****** 2026-01-17 01:05:50.128649 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:05:50.128653 | orchestrator | changed: [testbed-node-2] 2026-01-17 
01:05:50.128657 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:05:50.128661 | orchestrator | 2026-01-17 01:05:50.128665 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-01-17 01:05:50.128669 | orchestrator | Saturday 17 January 2026 01:04:57 +0000 (0:00:09.755) 0:02:11.781 ****** 2026-01-17 01:05:50.128675 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:05:50.128681 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:05:50.128687 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:05:50.128692 | orchestrator | 2026-01-17 01:05:50.128698 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-01-17 01:05:50.128704 | orchestrator | Saturday 17 January 2026 01:05:04 +0000 (0:00:06.694) 0:02:18.476 ****** 2026-01-17 01:05:50.128710 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:05:50.128717 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:05:50.128723 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:05:50.128729 | orchestrator | 2026-01-17 01:05:50.128736 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-01-17 01:05:50.128742 | orchestrator | Saturday 17 January 2026 01:05:17 +0000 (0:00:12.758) 0:02:31.234 ****** 2026-01-17 01:05:50.128754 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:05:50.128760 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:05:50.128766 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:05:50.128773 | orchestrator | 2026-01-17 01:05:50.128777 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-01-17 01:05:50.128781 | orchestrator | Saturday 17 January 2026 01:05:22 +0000 (0:00:05.760) 0:02:36.995 ****** 2026-01-17 01:05:50.128785 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:05:50.128788 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:05:50.128792 
| orchestrator | changed: [testbed-node-2] 2026-01-17 01:05:50.128796 | orchestrator | 2026-01-17 01:05:50.128800 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-01-17 01:05:50.128803 | orchestrator | Saturday 17 January 2026 01:05:28 +0000 (0:00:05.754) 0:02:42.750 ****** 2026-01-17 01:05:50.128808 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:05:50.128814 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:05:50.128819 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:05:50.128825 | orchestrator | 2026-01-17 01:05:50.128832 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-01-17 01:05:50.128838 | orchestrator | Saturday 17 January 2026 01:05:39 +0000 (0:00:10.627) 0:02:53.378 ****** 2026-01-17 01:05:50.128845 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:05:50.128851 | orchestrator | 2026-01-17 01:05:50.128857 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 01:05:50.128864 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-17 01:05:50.128870 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-17 01:05:50.128888 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-17 01:05:50.128895 | orchestrator | 2026-01-17 01:05:50.128901 | orchestrator | 2026-01-17 01:05:50.128907 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 01:05:50.128913 | orchestrator | Saturday 17 January 2026 01:05:47 +0000 (0:00:08.018) 0:03:01.396 ****** 2026-01-17 01:05:50.128919 | orchestrator | =============================================================================== 2026-01-17 01:05:50.128923 | orchestrator | designate : Copying over 
designate.conf -------------------------------- 22.73s 2026-01-17 01:05:50.128927 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.16s 2026-01-17 01:05:50.128930 | orchestrator | designate : Restart designate-central container ------------------------ 12.76s 2026-01-17 01:05:50.128934 | orchestrator | designate : Restart designate-worker container ------------------------- 10.63s 2026-01-17 01:05:50.128938 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 9.76s 2026-01-17 01:05:50.128942 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 8.91s 2026-01-17 01:05:50.128946 | orchestrator | designate : Copying over config.json files for services ----------------- 8.34s 2026-01-17 01:05:50.128949 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.02s 2026-01-17 01:05:50.128953 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.79s 2026-01-17 01:05:50.128957 | orchestrator | designate : Restart designate-api container ----------------------------- 6.69s 2026-01-17 01:05:50.128962 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.02s 2026-01-17 01:05:50.128968 | orchestrator | designate : Restart designate-producer container ------------------------ 5.76s 2026-01-17 01:05:50.128974 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.75s 2026-01-17 01:05:50.128979 | orchestrator | designate : Check designate containers ---------------------------------- 5.48s 2026-01-17 01:05:50.128986 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.16s 2026-01-17 01:05:50.128992 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 4.11s 2026-01-17 01:05:50.128998 | orchestrator | service-ks-register : designate | 
Granting user roles ------------------- 3.94s 2026-01-17 01:05:50.129004 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.79s 2026-01-17 01:05:50.129010 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.74s 2026-01-17 01:05:50.129017 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.73s 2026-01-17 01:05:53.171756 | orchestrator | 2026-01-17 01:05:53 | INFO  | Task e6c45b1b-c67c-45dd-bbc2-5d9c7663383c is in state STARTED 2026-01-17 01:05:53.171827 | orchestrator | 2026-01-17 01:05:53 | INFO  | Task 945b4aeb-9733-435c-98f8-dedd4708352b is in state STARTED 2026-01-17 01:05:53.171963 | orchestrator | 2026-01-17 01:05:53 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED 2026-01-17 01:05:53.172997 | orchestrator | 2026-01-17 01:05:53 | INFO  | Task 6ab6a137-e94f-4a62-9ab1-2d2eb6ed263c is in state STARTED 2026-01-17 01:05:53.173967 | orchestrator | 2026-01-17 01:05:53 | INFO  | Task 65437b38-f15f-48c3-8b3b-c3322790da8e is in state SUCCESS 2026-01-17 01:05:53.176053 | orchestrator | 2026-01-17 01:05:53 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:05:53.177191 | orchestrator | 2026-01-17 01:05:53.177243 | orchestrator | 2026-01-17 01:05:53.177254 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-17 01:05:53.177260 | orchestrator | 2026-01-17 01:05:53.177281 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-17 01:05:53.177289 | orchestrator | Saturday 17 January 2026 01:04:38 +0000 (0:00:00.267) 0:00:00.267 ****** 2026-01-17 01:05:53.177335 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:05:53.177343 | orchestrator | ok: [testbed-node-1] 2026-01-17 01:05:53.177347 | orchestrator | ok: [testbed-node-2] 2026-01-17 01:05:53.177351 | orchestrator | 2026-01-17 01:05:53.177355 | orchestrator | TASK 
[Group hosts based on enabled services] *********************************** 2026-01-17 01:05:53.177359 | orchestrator | Saturday 17 January 2026 01:04:38 +0000 (0:00:00.277) 0:00:00.545 ****** 2026-01-17 01:05:53.177363 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-01-17 01:05:53.177369 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-01-17 01:05:53.177376 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-01-17 01:05:53.177381 | orchestrator | 2026-01-17 01:05:53.177387 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-01-17 01:05:53.177393 | orchestrator | 2026-01-17 01:05:53.177399 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-17 01:05:53.177404 | orchestrator | Saturday 17 January 2026 01:04:38 +0000 (0:00:00.396) 0:00:00.941 ****** 2026-01-17 01:05:53.177410 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 01:05:53.177417 | orchestrator | 2026-01-17 01:05:53.177423 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-01-17 01:05:53.177430 | orchestrator | Saturday 17 January 2026 01:04:39 +0000 (0:00:00.510) 0:00:01.452 ****** 2026-01-17 01:05:53.177437 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-01-17 01:05:53.177443 | orchestrator | 2026-01-17 01:05:53.177449 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-01-17 01:05:53.177455 | orchestrator | Saturday 17 January 2026 01:04:42 +0000 (0:00:03.718) 0:00:05.171 ****** 2026-01-17 01:05:53.177461 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-01-17 01:05:53.177468 | orchestrator | changed: [testbed-node-0] => (item=placement -> 
https://api.testbed.osism.xyz:8780 -> public) 2026-01-17 01:05:53.177474 | orchestrator | 2026-01-17 01:05:53.177480 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-01-17 01:05:53.177488 | orchestrator | Saturday 17 January 2026 01:04:49 +0000 (0:00:06.346) 0:00:11.517 ****** 2026-01-17 01:05:53.177492 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-17 01:05:53.177496 | orchestrator | 2026-01-17 01:05:53.177500 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-01-17 01:05:53.177503 | orchestrator | Saturday 17 January 2026 01:04:52 +0000 (0:00:03.471) 0:00:14.988 ****** 2026-01-17 01:05:53.177507 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-17 01:05:53.177511 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-01-17 01:05:53.177515 | orchestrator | 2026-01-17 01:05:53.177518 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-01-17 01:05:53.177522 | orchestrator | Saturday 17 January 2026 01:04:56 +0000 (0:00:03.681) 0:00:18.670 ****** 2026-01-17 01:05:53.177526 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-17 01:05:53.177530 | orchestrator | 2026-01-17 01:05:53.177534 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-01-17 01:05:53.177538 | orchestrator | Saturday 17 January 2026 01:05:00 +0000 (0:00:03.649) 0:00:22.319 ****** 2026-01-17 01:05:53.177542 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-01-17 01:05:53.177546 | orchestrator | 2026-01-17 01:05:53.177549 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-17 01:05:53.177553 | orchestrator | Saturday 17 January 2026 01:05:03 +0000 (0:00:03.643) 0:00:25.963 ****** 2026-01-17 01:05:53.177557 | orchestrator | 
skipping: [testbed-node-0] 2026-01-17 01:05:53.177561 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:05:53.177565 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:05:53.177568 | orchestrator | 2026-01-17 01:05:53.177572 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-01-17 01:05:53.177581 | orchestrator | Saturday 17 January 2026 01:05:04 +0000 (0:00:00.287) 0:00:26.250 ****** 2026-01-17 01:05:53.177588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:53.177609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 
'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:53.177614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:53.177618 | orchestrator | 2026-01-17 01:05:53.177622 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-01-17 01:05:53.177626 | orchestrator | Saturday 17 January 2026 01:05:05 +0000 (0:00:01.734) 0:00:27.985 ****** 2026-01-17 01:05:53.177630 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:05:53.177633 | orchestrator | 2026-01-17 01:05:53.177637 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-01-17 01:05:53.177641 | orchestrator | Saturday 17 January 2026 01:05:06 +0000 (0:00:00.284) 0:00:28.269 ****** 2026-01-17 01:05:53.177645 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:05:53.177649 
| orchestrator | skipping: [testbed-node-1] 2026-01-17 01:05:53.177653 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:05:53.177656 | orchestrator | 2026-01-17 01:05:53.177660 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-17 01:05:53.177664 | orchestrator | Saturday 17 January 2026 01:05:07 +0000 (0:00:00.983) 0:00:29.252 ****** 2026-01-17 01:05:53.177668 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 01:05:53.177678 | orchestrator | 2026-01-17 01:05:53.177682 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-01-17 01:05:53.177686 | orchestrator | Saturday 17 January 2026 01:05:08 +0000 (0:00:00.962) 0:00:30.215 ****** 2026-01-17 01:05:53.177690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:53.177702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:53.177707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:53.177712 | orchestrator | 2026-01-17 01:05:53.177718 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-01-17 01:05:53.177725 | orchestrator | Saturday 17 January 2026 01:05:10 +0000 (0:00:02.173) 0:00:32.388 ****** 2026-01-17 
01:05:53.177731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-17 01:05:53.177743 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:05:53.177750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-17 01:05:53.177762 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-17 01:05:53.177768 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:05:53.177776 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:05:53.177780 | orchestrator | 2026-01-17 01:05:53.177791 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-01-17 01:05:53.177797 | orchestrator | Saturday 17 January 2026 01:05:10 +0000 (0:00:00.764) 0:00:33.153 ****** 2026-01-17 01:05:53.177804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-17 01:05:53.177811 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:05:53.177819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-17 01:05:53.177831 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:05:53.177839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-17 01:05:53.177844 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:05:53.177849 | orchestrator | 2026-01-17 01:05:53.177853 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-01-17 01:05:53.177858 | orchestrator | Saturday 17 January 2026 01:05:12 +0000 (0:00:01.248) 0:00:34.401 ****** 2026-01-17 01:05:53.177866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:53.177874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:53.177879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:53.177887 | orchestrator | 2026-01-17 01:05:53.177892 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-01-17 01:05:53.177896 | orchestrator | Saturday 17 January 2026 01:05:13 +0000 (0:00:01.589) 0:00:35.990 ****** 2026-01-17 01:05:53.177901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:53.177905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:53.177916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:53.177921 | orchestrator | 2026-01-17 01:05:53.177926 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-01-17 01:05:53.177930 | orchestrator | Saturday 17 January 2026 01:05:16 +0000 (0:00:02.575) 0:00:38.566 ****** 2026-01-17 01:05:53.177935 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-17 01:05:53.177939 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-17 01:05:53.177944 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-17 01:05:53.177948 | orchestrator | 2026-01-17 01:05:53.177953 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-01-17 01:05:53.177961 | orchestrator | Saturday 17 January 2026 01:05:17 +0000 (0:00:01.268) 0:00:39.834 ****** 2026-01-17 01:05:53.177965 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:05:53.177970 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:05:53.177974 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:05:53.177979 | orchestrator | 2026-01-17 01:05:53.177983 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-01-17 01:05:53.177987 | orchestrator | Saturday 17 January 2026 01:05:19 +0000 
(0:00:01.518) 0:00:41.353 ****** 2026-01-17 01:05:53.177992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-17 01:05:53.177996 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:05:53.178001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-17 01:05:53.178006 
| orchestrator | skipping: [testbed-node-1] 2026-01-17 01:05:53.178052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-17 01:05:53.178059 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:05:53.178063 | orchestrator | 2026-01-17 01:05:53.178068 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-01-17 01:05:53.178072 | orchestrator | Saturday 17 January 2026 01:05:19 +0000 (0:00:00.422) 0:00:41.776 ****** 2026-01-17 01:05:53.178076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:53.178089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-17 01:05:53.178093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-17 01:05:53.178098 | orchestrator |
2026-01-17 01:05:53.178103 | orchestrator | TASK [placement : Creating placement databases] ********************************
2026-01-17 01:05:53.178107 | orchestrator | Saturday 17 January 2026 01:05:20 +0000 (0:00:01.164) 0:00:42.941 ******
2026-01-17 01:05:53.178112 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:05:53.178116 | orchestrator |
2026-01-17 01:05:53.178120 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2026-01-17 01:05:53.178125 | orchestrator | Saturday 17 January 2026 01:05:23 +0000 (0:00:03.047) 0:00:45.988 ******
2026-01-17 01:05:53.178129 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:05:53.178134 | orchestrator |
2026-01-17 01:05:53.178138 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2026-01-17 01:05:53.178142 | orchestrator | Saturday 17 January 2026 01:05:26 +0000 (0:00:02.459) 0:00:48.448 ******
2026-01-17 01:05:53.178147 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:05:53.178151 | orchestrator |
2026-01-17 01:05:53.178156 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-01-17 01:05:53.178161 | orchestrator | Saturday 17 January 2026 01:05:40 +0000 (0:00:14.331) 0:01:02.779 ******
2026-01-17 01:05:53.178165 | orchestrator |
2026-01-17 01:05:53.178170 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-01-17 01:05:53.178174 | orchestrator | Saturday 17 January 2026 01:05:40 +0000 (0:00:00.064) 0:01:02.844 ******
2026-01-17 01:05:53.178179 | orchestrator |
2026-01-17 01:05:53.178186 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-01-17 01:05:53.178193 | orchestrator | Saturday 17 January 2026 01:05:40 +0000 (0:00:00.060) 0:01:02.904 ******
2026-01-17 01:05:53.178197 | orchestrator |
2026-01-17 01:05:53.178204 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2026-01-17 01:05:53.178209 | orchestrator | Saturday 17 January 2026 01:05:40 +0000 (0:00:00.068) 0:01:02.973 ******
2026-01-17 01:05:53.178213 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:05:53.178216 | orchestrator | changed: [testbed-node-1]
2026-01-17 01:05:53.178221 | orchestrator | changed: [testbed-node-2]
2026-01-17 01:05:53.178225 | orchestrator |
2026-01-17 01:05:53.178229 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 01:05:53.178234 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-17 01:05:53.178240 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-17 01:05:53.178243 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-17 01:05:53.178248 | orchestrator |
2026-01-17 01:05:53.178251 | orchestrator |
2026-01-17 01:05:53.178255 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 01:05:53.178259 | orchestrator | Saturday 17 January 2026 01:05:51 +0000 (0:00:10.783) 0:01:13.756 ******
2026-01-17 01:05:53.178263 | orchestrator | ===============================================================================
2026-01-17 01:05:53.178267 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.33s
2026-01-17 01:05:53.178271 | orchestrator | placement : Restart placement-api container ---------------------------- 10.78s
2026-01-17 01:05:53.178275 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.35s
2026-01-17 01:05:53.178278 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.72s
2026-01-17 01:05:53.178282 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.68s
2026-01-17 01:05:53.178286 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.65s
2026-01-17 01:05:53.178290 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.64s
2026-01-17 01:05:53.178294 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.47s
2026-01-17 01:05:53.178298 | orchestrator | placement : Creating placement databases -------------------------------- 3.05s
2026-01-17 01:05:53.178302 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.58s
2026-01-17 01:05:53.178321 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.46s
2026-01-17 01:05:53.178325 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.17s
2026-01-17 01:05:53.178329 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.73s
2026-01-17 01:05:53.178333 | orchestrator | placement : Copying over config.json files for services ----------------- 1.59s
2026-01-17 01:05:53.178337 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.52s
2026-01-17 01:05:53.178341 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.27s
2026-01-17 01:05:53.178345 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.25s
2026-01-17 01:05:53.178349 | orchestrator | placement : Check placement containers ---------------------------------- 1.16s
2026-01-17 01:05:53.178352 | orchestrator | placement : Set placement policy file ----------------------------------- 0.98s
2026-01-17 01:05:53.178357 | orchestrator |
placement : include_tasks ----------------------------------------------- 0.96s
2026-01-17 01:05:56.216537 | orchestrator | 2026-01-17 01:05:56 | INFO  | Task e6c45b1b-c67c-45dd-bbc2-5d9c7663383c is in state STARTED
2026-01-17 01:05:56.218046 | orchestrator | 2026-01-17 01:05:56 | INFO  | Task d5dd230b-b026-42aa-a8bd-49250b5485e8 is in state STARTED
2026-01-17 01:05:56.220966 | orchestrator | 2026-01-17 01:05:56 | INFO  | Task 945b4aeb-9733-435c-98f8-dedd4708352b is in state STARTED
2026-01-17 01:05:56.223056 | orchestrator | 2026-01-17 01:05:56 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED
2026-01-17 01:05:56.225015 | orchestrator | 2026-01-17 01:05:56 | INFO  | Task 6ab6a137-e94f-4a62-9ab1-2d2eb6ed263c is in state SUCCESS
2026-01-17 01:05:56.225051 | orchestrator | 2026-01-17 01:05:56 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:06:36.099670 | orchestrator | 2026-01-17 01:06:36 | INFO  | Task e6c45b1b-c67c-45dd-bbc2-5d9c7663383c is in state STARTED
2026-01-17 01:06:36.099751 | orchestrator | 2026-01-17 01:06:36 | INFO  | Task d5dd230b-b026-42aa-a8bd-49250b5485e8 is in state SUCCESS
2026-01-17 01:06:36.101476 | orchestrator | 2026-01-17 01:06:36 | INFO  | Task 945b4aeb-9733-435c-98f8-dedd4708352b is in state STARTED
2026-01-17 01:06:36.102225 | orchestrator | 2026-01-17 01:06:36 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED
2026-01-17 01:06:36.102638 | orchestrator | 2026-01-17 01:06:36 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED
2026-01-17 01:06:36.102664 | orchestrator | 2026-01-17 01:06:36 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:06:48.241339 | orchestrator | 2026-01-17 01:06:48 | INFO  | Task e6c45b1b-c67c-45dd-bbc2-5d9c7663383c is in state STARTED
2026-01-17 01:06:48.241431 | orchestrator | 2026-01-17 01:06:48 | INFO  | Task 945b4aeb-9733-435c-98f8-dedd4708352b is in state STARTED
2026-01-17 01:06:48.242991 | orchestrator | 2026-01-17 01:06:48 | INFO  | Task 
7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED 2026-01-17 01:06:48.243117 | orchestrator | 2026-01-17 01:06:48 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED 2026-01-17 01:06:48.243128 | orchestrator | 2026-01-17 01:06:48 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:06:51.292692 | orchestrator | 2026-01-17 01:06:51 | INFO  | Task e6c45b1b-c67c-45dd-bbc2-5d9c7663383c is in state STARTED 2026-01-17 01:06:51.295414 | orchestrator | 2026-01-17 01:06:51 | INFO  | Task 945b4aeb-9733-435c-98f8-dedd4708352b is in state STARTED 2026-01-17 01:06:51.297515 | orchestrator | 2026-01-17 01:06:51 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED 2026-01-17 01:06:51.302919 | orchestrator | 2026-01-17 01:06:51 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED 2026-01-17 01:06:51.302970 | orchestrator | 2026-01-17 01:06:51 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:06:54.349571 | orchestrator | 2026-01-17 01:06:54 | INFO  | Task e6c45b1b-c67c-45dd-bbc2-5d9c7663383c is in state STARTED 2026-01-17 01:06:54.354827 | orchestrator | 2026-01-17 01:06:54 | INFO  | Task 945b4aeb-9733-435c-98f8-dedd4708352b is in state STARTED 2026-01-17 01:06:54.357417 | orchestrator | 2026-01-17 01:06:54 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED 2026-01-17 01:06:54.360009 | orchestrator | 2026-01-17 01:06:54 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED 2026-01-17 01:06:54.360055 | orchestrator | 2026-01-17 01:06:54 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:06:57.404264 | orchestrator | 2026-01-17 01:06:57 | INFO  | Task e6c45b1b-c67c-45dd-bbc2-5d9c7663383c is in state STARTED 2026-01-17 01:06:57.404755 | orchestrator | 2026-01-17 01:06:57 | INFO  | Task 945b4aeb-9733-435c-98f8-dedd4708352b is in state STARTED 2026-01-17 01:06:57.405664 | orchestrator | 2026-01-17 01:06:57 | INFO  | Task 
7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED 2026-01-17 01:06:57.407115 | orchestrator | 2026-01-17 01:06:57 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED 2026-01-17 01:06:57.407134 | orchestrator | 2026-01-17 01:06:57 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:07:00.440786 | orchestrator | 2026-01-17 01:07:00 | INFO  | Task e6c45b1b-c67c-45dd-bbc2-5d9c7663383c is in state STARTED 2026-01-17 01:07:00.444579 | orchestrator | 2026-01-17 01:07:00 | INFO  | Task 945b4aeb-9733-435c-98f8-dedd4708352b is in state STARTED 2026-01-17 01:07:00.446692 | orchestrator | 2026-01-17 01:07:00 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED 2026-01-17 01:07:00.448054 | orchestrator | 2026-01-17 01:07:00 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED 2026-01-17 01:07:00.448088 | orchestrator | 2026-01-17 01:07:00 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:07:03.488037 | orchestrator | 2026-01-17 01:07:03 | INFO  | Task e6c45b1b-c67c-45dd-bbc2-5d9c7663383c is in state STARTED 2026-01-17 01:07:03.488147 | orchestrator | 2026-01-17 01:07:03 | INFO  | Task 945b4aeb-9733-435c-98f8-dedd4708352b is in state STARTED 2026-01-17 01:07:03.488311 | orchestrator | 2026-01-17 01:07:03 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED 2026-01-17 01:07:03.488861 | orchestrator | 2026-01-17 01:07:03 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED 2026-01-17 01:07:03.488877 | orchestrator | 2026-01-17 01:07:03 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:07:06.532548 | orchestrator | 2026-01-17 01:07:06 | INFO  | Task e6c45b1b-c67c-45dd-bbc2-5d9c7663383c is in state STARTED 2026-01-17 01:07:06.534553 | orchestrator | 2026-01-17 01:07:06 | INFO  | Task 945b4aeb-9733-435c-98f8-dedd4708352b is in state STARTED 2026-01-17 01:07:06.536673 | orchestrator | 2026-01-17 01:07:06 | INFO  | Task 
7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED 2026-01-17 01:07:06.538904 | orchestrator | 2026-01-17 01:07:06 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED 2026-01-17 01:07:06.538978 | orchestrator | 2026-01-17 01:07:06 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:07:09.591302 | orchestrator | 2026-01-17 01:07:09 | INFO  | Task e6c45b1b-c67c-45dd-bbc2-5d9c7663383c is in state STARTED 2026-01-17 01:07:09.592051 | orchestrator | 2026-01-17 01:07:09 | INFO  | Task 945b4aeb-9733-435c-98f8-dedd4708352b is in state STARTED 2026-01-17 01:07:09.593732 | orchestrator | 2026-01-17 01:07:09 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED 2026-01-17 01:07:09.595314 | orchestrator | 2026-01-17 01:07:09 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED 2026-01-17 01:07:09.595618 | orchestrator | 2026-01-17 01:07:09 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:07:12.626116 | orchestrator | 2026-01-17 01:07:12 | INFO  | Task e6c45b1b-c67c-45dd-bbc2-5d9c7663383c is in state STARTED 2026-01-17 01:07:12.626884 | orchestrator | 2026-01-17 01:07:12 | INFO  | Task 945b4aeb-9733-435c-98f8-dedd4708352b is in state STARTED 2026-01-17 01:07:12.627753 | orchestrator | 2026-01-17 01:07:12 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED 2026-01-17 01:07:12.628802 | orchestrator | 2026-01-17 01:07:12 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED 2026-01-17 01:07:12.628825 | orchestrator | 2026-01-17 01:07:12 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:07:15.677744 | orchestrator | 2026-01-17 01:07:15 | INFO  | Task e6c45b1b-c67c-45dd-bbc2-5d9c7663383c is in state STARTED 2026-01-17 01:07:15.678572 | orchestrator | 2026-01-17 01:07:15 | INFO  | Task 945b4aeb-9733-435c-98f8-dedd4708352b is in state STARTED 2026-01-17 01:07:15.679491 | orchestrator | 2026-01-17 01:07:15 | INFO  | Task 
7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED 2026-01-17 01:07:15.680574 | orchestrator | 2026-01-17 01:07:15 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED 2026-01-17 01:07:15.680816 | orchestrator | 2026-01-17 01:07:15 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:07:18.704748 | orchestrator | 2026-01-17 01:07:18.704789 | orchestrator | 2026-01-17 01:07:18.704794 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-17 01:07:18.704797 | orchestrator | 2026-01-17 01:07:18.704801 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-17 01:07:18.704804 | orchestrator | Saturday 17 January 2026 01:05:51 +0000 (0:00:00.189) 0:00:00.189 ****** 2026-01-17 01:07:18.704808 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:07:18.704812 | orchestrator | ok: [testbed-node-1] 2026-01-17 01:07:18.704815 | orchestrator | ok: [testbed-node-2] 2026-01-17 01:07:18.704818 | orchestrator | 2026-01-17 01:07:18.704821 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-17 01:07:18.704825 | orchestrator | Saturday 17 January 2026 01:05:52 +0000 (0:00:00.337) 0:00:00.526 ****** 2026-01-17 01:07:18.704828 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-17 01:07:18.704843 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-17 01:07:18.704846 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-17 01:07:18.704849 | orchestrator | 2026-01-17 01:07:18.704852 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-01-17 01:07:18.704856 | orchestrator | 2026-01-17 01:07:18.704859 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-01-17 01:07:18.704862 | orchestrator | Saturday 17 January 2026 01:05:52 +0000 
(0:00:00.685) 0:00:01.212 ****** 2026-01-17 01:07:18.704865 | orchestrator | ok: [testbed-node-1] 2026-01-17 01:07:18.704868 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:07:18.704871 | orchestrator | ok: [testbed-node-2] 2026-01-17 01:07:18.704875 | orchestrator | 2026-01-17 01:07:18.704878 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 01:07:18.704881 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 01:07:18.704886 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 01:07:18.704889 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-17 01:07:18.704892 | orchestrator | 2026-01-17 01:07:18.704895 | orchestrator | 2026-01-17 01:07:18.704898 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 01:07:18.704901 | orchestrator | Saturday 17 January 2026 01:05:53 +0000 (0:00:00.829) 0:00:02.042 ****** 2026-01-17 01:07:18.704904 | orchestrator | =============================================================================== 2026-01-17 01:07:18.704907 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.83s 2026-01-17 01:07:18.704910 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.69s 2026-01-17 01:07:18.704914 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2026-01-17 01:07:18.704917 | orchestrator | 2026-01-17 01:07:18.704920 | orchestrator | 2026-01-17 01:07:18.704929 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-17 01:07:18.704933 | orchestrator | 2026-01-17 01:07:18.704936 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 
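The "Task … is in state STARTED / Wait 1 second(s) until the next check" output earlier in this log comes from a poll-until-done loop. A minimal sketch of that pattern follows; `fetch_state` is a hypothetical stand-in for the real status query, not the actual OSISM client API.

```python
import time

def wait_for_tasks(task_ids, fetch_state, interval=1.0, timeout=3600):
    """Poll task states until every task reports SUCCESS.

    fetch_state(task_id) -> str is a hypothetical callable standing in
    for the real status lookup; tasks still in STARTED are polled again.
    """
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = fetch_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
            elif state == "FAILURE":
                raise RuntimeError(f"task {task_id} failed")
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

With several task IDs in flight, each pass prints one status line per pending task, matching the shape of the log above.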
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:05:58 +0000 (0:00:00.299) 0:00:00.299 ******
2026-01-17 01:07:18 | orchestrator | ok: [testbed-manager]
2026-01-17 01:07:18 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:07:18 | orchestrator | ok: [testbed-node-1]
2026-01-17 01:07:18 | orchestrator | ok: [testbed-node-2]
2026-01-17 01:07:18 | orchestrator | ok: [testbed-node-3]
2026-01-17 01:07:18 | orchestrator | ok: [testbed-node-4]
2026-01-17 01:07:18 | orchestrator | ok: [testbed-node-5]
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:05:59 +0000 (0:00:00.890) 0:00:01.189 ******
2026-01-17 01:07:18 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-01-17 01:07:18 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-01-17 01:07:18 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-01-17 01:07:18 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-01-17 01:07:18 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-01-17 01:07:18 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-01-17 01:07:18 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:06:00 +0000 (0:00:00.741) 0:00:01.930 ******
2026-01-17 01:07:18 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:06:03 +0000 (0:00:02.608) 0:00:04.539 ******
2026-01-17 01:07:18 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:06:07 +0000 (0:00:04.168) 0:00:08.707 ******
2026-01-17 01:07:18 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-01-17 01:07:18 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:06:14 +0000 (0:00:07.034) 0:00:15.742 ******
2026-01-17 01:07:18 | orchestrator | ok: [testbed-manager] => (item=service)
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:06:17 +0000 (0:00:03.625) 0:00:19.368 ******
2026-01-17 01:07:18 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-17 01:07:18 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:06:21 +0000 (0:00:03.548) 0:00:22.916 ******
2026-01-17 01:07:18 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-01-17 01:07:18 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:06:28 +0000 (0:00:06.671) 0:00:29.587 ******
2026-01-17 01:07:18 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 01:07:18 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 01:07:18 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 01:07:18 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 01:07:18 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 01:07:18 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 01:07:18 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 01:07:18 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:06:33 +0000 (0:00:05.738) 0:00:35.326 ******
2026-01-17 01:07:18 | orchestrator | ===============================================================================
2026-01-17 01:07:18 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.03s
2026-01-17 01:07:18 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.67s
2026-01-17 01:07:18 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.74s
2026-01-17 01:07:18 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.17s
2026-01-17 01:07:18 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.63s
2026-01-17 01:07:18 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.55s
2026-01-17 01:07:18 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.61s
2026-01-17 01:07:18 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.89s
2026-01-17 01:07:18 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | 2026-01-17 01:07:18 | INFO  | Task e6c45b1b-c67c-45dd-bbc2-5d9c7663383c is in state SUCCESS
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-17 01:07:18 | orchestrator |
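The endpoint URLs registered by `service-ks-register` above contain the literal placeholder `%(project_id)s`, which Keystone substitutes per project when handing out the service catalog. A small illustrative sketch of that substitution (not OSISM code; the endpoint map is copied from the logged Swift endpoints):

```python
# Endpoint templates as logged by "ceph-rgw | Creating endpoints".
ENDPOINTS = {
    "internal": "https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s",
    "public": "https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s",
}

def expand_endpoint(interface: str, project_id: str) -> str:
    """Substitute the caller's project ID into the endpoint template,
    the same %-style interpolation Keystone applies to the catalog."""
    return ENDPOINTS[interface] % {"project_id": project_id}
```

For example, `expand_endpoint("public", "abc123")` yields the per-project Swift URL ending in `AUTH_abc123`.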
2026-01-17 01:07:18 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:05:11 +0000 (0:00:00.470) 0:00:00.470 ******
2026-01-17 01:07:18 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:07:18 | orchestrator | ok: [testbed-node-1]
2026-01-17 01:07:18 | orchestrator | ok: [testbed-node-2]
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:05:12 +0000 (0:00:00.396) 0:00:00.867 ******
2026-01-17 01:07:18 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-01-17 01:07:18 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-01-17 01:07:18 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:05:12 +0000 (0:00:00.405) 0:00:01.272 ******
2026-01-17 01:07:18 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:05:13 +0000 (0:00:01.012) 0:00:02.285 ******
2026-01-17 01:07:18 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:05:17 +0000 (0:00:03.569) 0:00:05.854 ******
2026-01-17 01:07:18 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-01-17 01:07:18 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:05:24 +0000 (0:00:07.209) 0:00:13.064 ******
2026-01-17 01:07:18 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:05:27 +0000 (0:00:03.518) 0:00:16.583 ******
2026-01-17 01:07:18 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-17 01:07:18 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:05:32 +0000 (0:00:04.306) 0:00:20.889 ******
2026-01-17 01:07:18 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:05:35 +0000 (0:00:03.466) 0:00:24.355 ******
2026-01-17 01:07:18 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:05:39 +0000 (0:00:03.753) 0:00:28.109 ******
2026-01-17 01:07:18 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:05:43 +0000 (0:00:03.844) 0:00:31.954 ******
2026-01-17 01:07:18 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:05:47 +0000 (0:00:04.115) 0:00:36.070 ******
2026-01-17 01:07:18 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:05:51 +0000 (0:00:03.818) 0:00:39.888 ******
2026-01-17 01:07:18 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
[... identical magnum-api items logged for testbed-node-2 and testbed-node-1, differing only in the healthcheck target (healthcheck_curl http://192.168.16.12:9511 and http://192.168.16.11:9511) ...]
2026-01-17 01:07:18 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
[... identical magnum-conductor items logged for testbed-node-0 and testbed-node-2, differing only in no_proxy (192.168.16.10 and 192.168.16.12) ...]
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:05:52 +0000 (0:00:01.527) 0:00:41.415 ******
2026-01-17 01:07:18 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:05:52 +0000 (0:00:00.127) 0:00:41.543 ******
2026-01-17 01:07:18 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:07:18 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:07:18 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-01-17 01:07:18 | orchestrator | Saturday 17 January 2026 01:05:53 +0000 (0:00:00.576) 0:00:42.119 ******
2026-01-17 01:07:18 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-17 01:07:18 | orchestrator |
2026-01-17 01:07:18 | orchestrator | TASK [magnum : Copying over kubeconfig
file] *********************************** 2026-01-17 01:07:18.707046 | orchestrator | Saturday 17 January 2026 01:05:54 +0000 (0:00:00.929) 0:00:43.049 ****** 2026-01-17 01:07:18.707051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-17 01:07:18.707057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}}) 2026-01-17 01:07:18.707065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-17 01:07:18.707076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:07:18.707085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:07:18.707090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:07:18.707096 | orchestrator | 2026-01-17 01:07:18.707101 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-01-17 01:07:18.707106 | orchestrator | Saturday 17 January 2026 01:05:56 +0000 (0:00:02.523) 0:00:45.572 ****** 2026-01-17 01:07:18.707111 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:07:18.707116 | orchestrator | ok: [testbed-node-1] 2026-01-17 01:07:18.707122 | orchestrator | ok: [testbed-node-2] 2026-01-17 01:07:18.707127 | orchestrator | 2026-01-17 01:07:18.707132 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-01-17 01:07:18.707137 | orchestrator | Saturday 17 January 2026 01:05:57 
+0000 (0:00:00.351) 0:00:45.924 ****** 2026-01-17 01:07:18.707143 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 01:07:18.707148 | orchestrator | 2026-01-17 01:07:18.707153 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-01-17 01:07:18.707159 | orchestrator | Saturday 17 January 2026 01:05:58 +0000 (0:00:00.878) 0:00:46.803 ****** 2026-01-17 01:07:18.707215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-17 01:07:18.707227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-17 01:07:18.707236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-17 01:07:18.707242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:07:18.707247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:07:18.707257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:07:18.707262 | orchestrator | 2026-01-17 01:07:18.707267 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-01-17 01:07:18.707275 | orchestrator | Saturday 17 January 2026 01:06:01 +0000 (0:00:03.108) 0:00:49.911 ****** 2026-01-17 01:07:18.707284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-17 01:07:18.707290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-17 01:07:18.707295 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:07:18.707301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-17 01:07:18.707307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-17 01:07:18.707312 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:07:18.707327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-17 01:07:18.707338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-17 01:07:18.707344 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:07:18.707349 | orchestrator | 2026-01-17 01:07:18.707354 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-01-17 01:07:18.707359 | orchestrator | Saturday 17 January 2026 01:06:02 +0000 (0:00:01.361) 0:00:51.273 ****** 2026-01-17 01:07:18.707408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-17 01:07:18.707416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-17 01:07:18.707421 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:07:18.707429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-17 01:07:18.707439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-17 01:07:18.707444 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:07:18.707454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-17 01:07:18.707460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-17 01:07:18.707465 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:07:18.707471 | orchestrator | 2026-01-17 01:07:18.707476 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-01-17 01:07:18.707482 | orchestrator | Saturday 17 January 2026 01:06:04 +0000 (0:00:01.449) 0:00:52.722 ****** 2026-01-17 01:07:18.707487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-17 01:07:18.707495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-17 01:07:18.707506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': 
'9511'}}}}) 2026-01-17 01:07:18.707511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:07:18.707523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:07:18.707534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:07:18.707539 | orchestrator | 2026-01-17 01:07:18.707545 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-01-17 01:07:18.707553 | orchestrator | Saturday 17 January 2026 01:06:07 +0000 (0:00:03.460) 0:00:56.183 ****** 2026-01-17 01:07:18.707595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-17 01:07:18.707606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-17 01:07:18.707611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-17 01:07:18.707616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:07:18.707622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:07:18.707634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:07:18.707640 | orchestrator | 2026-01-17 01:07:18.707645 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-01-17 01:07:18.707650 | orchestrator | Saturday 17 January 2026 01:06:17 +0000 
(0:00:09.907) 0:01:06.090 ****** 2026-01-17 01:07:18.707659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-17 01:07:18.707664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-17 01:07:18.707670 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:07:18.707675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-17 01:07:18.707681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-17 01:07:18.707691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-17 01:07:18.707696 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:07:18.707704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-17 01:07:18.707710 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:07:18.707716 | orchestrator | 2026-01-17 01:07:18.707721 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-01-17 01:07:18.707726 | orchestrator | Saturday 17 January 2026 01:06:17 +0000 (0:00:00.493) 0:01:06.584 ****** 2026-01-17 01:07:18.707732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-17 01:07:18.707737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-17 01:07:18.707750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-17 01:07:18.707756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:07:18.707765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}}) 2026-01-17 01:07:18.707770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:07:18.707776 | orchestrator | 2026-01-17 01:07:18.707781 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-01-17 01:07:18.707786 | orchestrator | Saturday 17 January 2026 01:06:21 +0000 (0:00:03.106) 0:01:09.690 ****** 2026-01-17 01:07:18.707792 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:07:18.707797 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:07:18.707806 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:07:18.707811 | orchestrator | 2026-01-17 01:07:18.707816 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-01-17 01:07:18.707821 | orchestrator | Saturday 17 January 2026 01:06:21 +0000 (0:00:00.303) 0:01:09.994 ****** 2026-01-17 01:07:18.707827 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:07:18.707832 | orchestrator | 2026-01-17 01:07:18.707837 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-01-17 01:07:18.707842 | orchestrator | Saturday 17 January 2026 01:06:23 +0000 (0:00:02.057) 0:01:12.051 ****** 2026-01-17 01:07:18.707847 | orchestrator | changed: 
[testbed-node-0] 2026-01-17 01:07:18.707852 | orchestrator | 2026-01-17 01:07:18.707857 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-01-17 01:07:18.707863 | orchestrator | Saturday 17 January 2026 01:06:25 +0000 (0:00:02.505) 0:01:14.557 ****** 2026-01-17 01:07:18.707868 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:07:18.707874 | orchestrator | 2026-01-17 01:07:18.707879 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-17 01:07:18.707884 | orchestrator | Saturday 17 January 2026 01:06:42 +0000 (0:00:16.821) 0:01:31.378 ****** 2026-01-17 01:07:18.707889 | orchestrator | 2026-01-17 01:07:18.707894 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-17 01:07:18.707900 | orchestrator | Saturday 17 January 2026 01:06:42 +0000 (0:00:00.106) 0:01:31.485 ****** 2026-01-17 01:07:18.707905 | orchestrator | 2026-01-17 01:07:18.707910 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-17 01:07:18.707915 | orchestrator | Saturday 17 January 2026 01:06:42 +0000 (0:00:00.135) 0:01:31.620 ****** 2026-01-17 01:07:18.707920 | orchestrator | 2026-01-17 01:07:18.707925 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-01-17 01:07:18.707930 | orchestrator | Saturday 17 January 2026 01:06:43 +0000 (0:00:00.137) 0:01:31.758 ****** 2026-01-17 01:07:18.707935 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:07:18.707941 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:07:18.707946 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:07:18.707951 | orchestrator | 2026-01-17 01:07:18.707959 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-01-17 01:07:18.707964 | orchestrator | Saturday 17 January 2026 01:07:01 +0000 (0:00:18.650) 
0:01:50.408 ****** 2026-01-17 01:07:18.707969 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:07:18.707974 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:07:18.707979 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:07:18.707984 | orchestrator | 2026-01-17 01:07:18.707989 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 01:07:18.707994 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-17 01:07:18.708000 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-17 01:07:18.708005 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-17 01:07:18.708009 | orchestrator | 2026-01-17 01:07:18.708015 | orchestrator | 2026-01-17 01:07:18.708020 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 01:07:18.708025 | orchestrator | Saturday 17 January 2026 01:07:14 +0000 (0:00:13.236) 0:02:03.645 ****** 2026-01-17 01:07:18.708030 | orchestrator | =============================================================================== 2026-01-17 01:07:18.708035 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 18.65s 2026-01-17 01:07:18.708043 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.82s 2026-01-17 01:07:18.708049 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 13.24s 2026-01-17 01:07:18.708059 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 9.91s 2026-01-17 01:07:18.708064 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.21s 2026-01-17 01:07:18.708069 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.31s 
2026-01-17 01:07:18.708074 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.12s 2026-01-17 01:07:18.708079 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.84s 2026-01-17 01:07:18.708084 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.82s 2026-01-17 01:07:18.708090 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.75s 2026-01-17 01:07:18.708095 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.57s 2026-01-17 01:07:18.708100 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.52s 2026-01-17 01:07:18.708105 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.47s 2026-01-17 01:07:18.708110 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.46s 2026-01-17 01:07:18.708115 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.11s 2026-01-17 01:07:18.708120 | orchestrator | magnum : Check magnum containers ---------------------------------------- 3.11s 2026-01-17 01:07:18.708125 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.52s 2026-01-17 01:07:18.708130 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.51s 2026-01-17 01:07:18.708135 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.06s 2026-01-17 01:07:18.708141 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.53s 2026-01-17 01:07:18.708146 | orchestrator | 2026-01-17 01:07:18 | INFO  | Task 945b4aeb-9733-435c-98f8-dedd4708352b is in state STARTED 2026-01-17 01:07:18.708151 | orchestrator | 2026-01-17 01:07:18 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state 
STARTED 2026-01-17 01:07:18.708156 | orchestrator | 2026-01-17 01:07:18 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED 2026-01-17 01:07:18.708162 | orchestrator | 2026-01-17 01:07:18 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED 2026-01-17 01:07:18.708181 | orchestrator | 2026-01-17 01:07:18 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:07:21.747064 | orchestrator | 2026-01-17 01:07:21 | INFO  | Task 945b4aeb-9733-435c-98f8-dedd4708352b is in state STARTED 2026-01-17 01:07:21.747479 | orchestrator | 2026-01-17 01:07:21 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED 2026-01-17 01:07:21.748257 | orchestrator | 2026-01-17 01:07:21 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED 2026-01-17 01:07:21.748993 | orchestrator | 2026-01-17 01:07:21 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED 2026-01-17 01:07:21.749022 | orchestrator | 2026-01-17 01:07:21 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:07:24.784811 | orchestrator | 2026-01-17 01:07:24 | INFO  | Task 945b4aeb-9733-435c-98f8-dedd4708352b is in state STARTED 2026-01-17 01:07:24.785344 | orchestrator | 2026-01-17 01:07:24 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED 2026-01-17 01:07:24.786115 | orchestrator | 2026-01-17 01:07:24 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED 2026-01-17 01:07:24.786940 | orchestrator | 2026-01-17 01:07:24 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED 2026-01-17 01:07:24.786976 | orchestrator | 2026-01-17 01:07:24 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:07:27.827845 | orchestrator | 2026-01-17 01:07:27 | INFO  | Task 945b4aeb-9733-435c-98f8-dedd4708352b is in state STARTED 2026-01-17 01:07:27.828568 | orchestrator | 2026-01-17 01:07:27 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state STARTED 2026-01-17 
01:07:27.829496 | orchestrator | 2026-01-17 01:07:27 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED 2026-01-17 01:07:27.830316 | orchestrator | 2026-01-17 01:07:27 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED 2026-01-17 01:07:27.830455 | orchestrator | 2026-01-17 01:07:27 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:07:30.857829 | orchestrator | 2026-01-17 01:07:30 | INFO  | Task 945b4aeb-9733-435c-98f8-dedd4708352b is in state STARTED 2026-01-17 01:07:30.860548 | orchestrator | 2026-01-17 01:07:30 | INFO  | Task 7c0cfeff-5f53-4e00-8a2f-3700890c45c2 is in state SUCCESS 2026-01-17 01:07:30.861864 | orchestrator | 2026-01-17 01:07:30.861930 | orchestrator | 2026-01-17 01:07:30.861939 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-17 01:07:30.861948 | orchestrator | 2026-01-17 01:07:30.861955 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-17 01:07:30.861961 | orchestrator | Saturday 17 January 2026 01:02:46 +0000 (0:00:00.291) 0:00:00.291 ****** 2026-01-17 01:07:30.861970 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:07:30.861978 | orchestrator | ok: [testbed-node-1] 2026-01-17 01:07:30.861986 | orchestrator | ok: [testbed-node-2] 2026-01-17 01:07:30.861993 | orchestrator | ok: [testbed-node-3] 2026-01-17 01:07:30.862000 | orchestrator | ok: [testbed-node-4] 2026-01-17 01:07:30.862008 | orchestrator | ok: [testbed-node-5] 2026-01-17 01:07:30.862048 | orchestrator | 2026-01-17 01:07:30.862056 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-17 01:07:30.862063 | orchestrator | Saturday 17 January 2026 01:02:47 +0000 (0:00:00.847) 0:00:01.138 ****** 2026-01-17 01:07:30.862069 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-01-17 01:07:30.862076 | orchestrator | ok: [testbed-node-1] => 
(item=enable_neutron_True) 2026-01-17 01:07:30.862082 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-01-17 01:07:30.862089 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-01-17 01:07:30.862096 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-01-17 01:07:30.862103 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-01-17 01:07:30.862110 | orchestrator | 2026-01-17 01:07:30.862117 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-01-17 01:07:30.862124 | orchestrator | 2026-01-17 01:07:30.862131 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-17 01:07:30.862139 | orchestrator | Saturday 17 January 2026 01:02:47 +0000 (0:00:00.775) 0:00:01.914 ****** 2026-01-17 01:07:30.862252 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 01:07:30.862263 | orchestrator | 2026-01-17 01:07:30.862486 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-01-17 01:07:30.862498 | orchestrator | Saturday 17 January 2026 01:02:48 +0000 (0:00:01.081) 0:00:02.995 ****** 2026-01-17 01:07:30.862505 | orchestrator | ok: [testbed-node-1] 2026-01-17 01:07:30.862512 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:07:30.862519 | orchestrator | ok: [testbed-node-2] 2026-01-17 01:07:30.862526 | orchestrator | ok: [testbed-node-3] 2026-01-17 01:07:30.862533 | orchestrator | ok: [testbed-node-4] 2026-01-17 01:07:30.862540 | orchestrator | ok: [testbed-node-5] 2026-01-17 01:07:30.862546 | orchestrator | 2026-01-17 01:07:30.862553 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-01-17 01:07:30.862559 | orchestrator | Saturday 17 January 2026 01:02:50 +0000 (0:00:01.237) 
0:00:04.233 ****** 2026-01-17 01:07:30.862566 | orchestrator | ok: [testbed-node-1] 2026-01-17 01:07:30.862572 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:07:30.862601 | orchestrator | ok: [testbed-node-2] 2026-01-17 01:07:30.862608 | orchestrator | ok: [testbed-node-3] 2026-01-17 01:07:30.862615 | orchestrator | ok: [testbed-node-4] 2026-01-17 01:07:30.862621 | orchestrator | ok: [testbed-node-5] 2026-01-17 01:07:30.862628 | orchestrator | 2026-01-17 01:07:30.862633 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-01-17 01:07:30.862639 | orchestrator | Saturday 17 January 2026 01:02:51 +0000 (0:00:01.087) 0:00:05.320 ****** 2026-01-17 01:07:30.862645 | orchestrator | ok: [testbed-node-0] => { 2026-01-17 01:07:30.862652 | orchestrator |  "changed": false, 2026-01-17 01:07:30.862658 | orchestrator |  "msg": "All assertions passed" 2026-01-17 01:07:30.862664 | orchestrator | } 2026-01-17 01:07:30.862670 | orchestrator | ok: [testbed-node-1] => { 2026-01-17 01:07:30.862676 | orchestrator |  "changed": false, 2026-01-17 01:07:30.862683 | orchestrator |  "msg": "All assertions passed" 2026-01-17 01:07:30.862689 | orchestrator | } 2026-01-17 01:07:30.862695 | orchestrator | ok: [testbed-node-2] => { 2026-01-17 01:07:30.862701 | orchestrator |  "changed": false, 2026-01-17 01:07:30.862709 | orchestrator |  "msg": "All assertions passed" 2026-01-17 01:07:30.862715 | orchestrator | } 2026-01-17 01:07:30.862722 | orchestrator | ok: [testbed-node-3] => { 2026-01-17 01:07:30.862762 | orchestrator |  "changed": false, 2026-01-17 01:07:30.862924 | orchestrator |  "msg": "All assertions passed" 2026-01-17 01:07:30.862935 | orchestrator | } 2026-01-17 01:07:30.862942 | orchestrator | ok: [testbed-node-4] => { 2026-01-17 01:07:30.862948 | orchestrator |  "changed": false, 2026-01-17 01:07:30.862955 | orchestrator |  "msg": "All assertions passed" 2026-01-17 01:07:30.862961 | orchestrator | } 2026-01-17 01:07:30.862967 | 
orchestrator | ok: [testbed-node-5] => { 2026-01-17 01:07:30.862987 | orchestrator |  "changed": false, 2026-01-17 01:07:30.862995 | orchestrator |  "msg": "All assertions passed" 2026-01-17 01:07:30.863001 | orchestrator | } 2026-01-17 01:07:30.863008 | orchestrator | 2026-01-17 01:07:30.863016 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-01-17 01:07:30.863024 | orchestrator | Saturday 17 January 2026 01:02:52 +0000 (0:00:00.888) 0:00:06.209 ****** 2026-01-17 01:07:30.863030 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:07:30.863037 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:07:30.863043 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:07:30.863050 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:07:30.863058 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:07:30.863064 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:07:30.863071 | orchestrator | 2026-01-17 01:07:30.863078 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-01-17 01:07:30.863084 | orchestrator | Saturday 17 January 2026 01:02:52 +0000 (0:00:00.758) 0:00:06.967 ****** 2026-01-17 01:07:30.863091 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-01-17 01:07:30.863098 | orchestrator | 2026-01-17 01:07:30.863104 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-01-17 01:07:30.863111 | orchestrator | Saturday 17 January 2026 01:02:56 +0000 (0:00:03.754) 0:00:10.722 ****** 2026-01-17 01:07:30.863117 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-01-17 01:07:30.863125 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-01-17 01:07:30.863132 | orchestrator | 2026-01-17 01:07:30.863217 | orchestrator | TASK [service-ks-register 
: neutron | Creating projects] *********************** 2026-01-17 01:07:30.863229 | orchestrator | Saturday 17 January 2026 01:03:03 +0000 (0:00:07.052) 0:00:17.775 ****** 2026-01-17 01:07:30.863236 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-17 01:07:30.863243 | orchestrator | 2026-01-17 01:07:30.863250 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-01-17 01:07:30.863257 | orchestrator | Saturday 17 January 2026 01:03:07 +0000 (0:00:03.748) 0:00:21.523 ****** 2026-01-17 01:07:30.863277 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-17 01:07:30.863284 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-01-17 01:07:30.863290 | orchestrator | 2026-01-17 01:07:30.863297 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-01-17 01:07:30.863304 | orchestrator | Saturday 17 January 2026 01:03:11 +0000 (0:00:04.202) 0:00:25.726 ****** 2026-01-17 01:07:30.863311 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-17 01:07:30.863318 | orchestrator | 2026-01-17 01:07:30.863324 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-01-17 01:07:30.863331 | orchestrator | Saturday 17 January 2026 01:03:15 +0000 (0:00:03.408) 0:00:29.135 ****** 2026-01-17 01:07:30.863338 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-01-17 01:07:30.863344 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-01-17 01:07:30.863351 | orchestrator | 2026-01-17 01:07:30.863358 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-17 01:07:30.863365 | orchestrator | Saturday 17 January 2026 01:03:21 +0000 (0:00:06.832) 0:00:35.967 ****** 2026-01-17 01:07:30.863372 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:07:30.863379 | 
orchestrator | skipping: [testbed-node-1] 2026-01-17 01:07:30.863386 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:07:30.863393 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:07:30.863399 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:07:30.863405 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:07:30.863411 | orchestrator | 2026-01-17 01:07:30.863417 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-01-17 01:07:30.863423 | orchestrator | Saturday 17 January 2026 01:03:22 +0000 (0:00:00.778) 0:00:36.746 ****** 2026-01-17 01:07:30.863429 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:07:30.863435 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:07:30.863440 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:07:30.863447 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:07:30.863454 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:07:30.863460 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:07:30.863467 | orchestrator | 2026-01-17 01:07:30.863473 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-01-17 01:07:30.863479 | orchestrator | Saturday 17 January 2026 01:03:25 +0000 (0:00:02.413) 0:00:39.160 ****** 2026-01-17 01:07:30.863486 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:07:30.863492 | orchestrator | ok: [testbed-node-1] 2026-01-17 01:07:30.863499 | orchestrator | ok: [testbed-node-2] 2026-01-17 01:07:30.863505 | orchestrator | ok: [testbed-node-4] 2026-01-17 01:07:30.863512 | orchestrator | ok: [testbed-node-3] 2026-01-17 01:07:30.863519 | orchestrator | ok: [testbed-node-5] 2026-01-17 01:07:30.863526 | orchestrator | 2026-01-17 01:07:30.863533 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-01-17 01:07:30.863540 | orchestrator | Saturday 17 January 2026 01:03:26 +0000 (0:00:01.298) 0:00:40.459 ****** 
2026-01-17 01:07:30.863547 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:07:30.863555 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:07:30.863563 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:07:30.863570 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:07:30.863578 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:07:30.863588 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:07:30.863596 | orchestrator | 2026-01-17 01:07:30.863603 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-01-17 01:07:30.863609 | orchestrator | Saturday 17 January 2026 01:03:30 +0000 (0:00:03.907) 0:00:44.366 ****** 2026-01-17 01:07:30.863628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-17 01:07:30.863685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-17 01:07:30.863697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-17 01:07:30.863706 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-17 01:07:30.863715 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-17 01:07:30.863727 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-17 01:07:30.863746 | orchestrator | 2026-01-17 01:07:30.863754 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] 
***************************** 2026-01-17 01:07:30.863762 | orchestrator | Saturday 17 January 2026 01:03:33 +0000 (0:00:00.780) 0:00:47.510 ******
2026-01-17 01:07:30.863769 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not a directory
2026-01-17 01:07:30.863935 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-17 01:07:30.863942 | orchestrator |
2026-01-17 01:07:30.863948 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-17 01:07:30.863977 | orchestrator | Saturday 17 January 2026 01:03:34 +0000 (0:00:00.780) 0:00:48.291 ******
2026-01-17 01:07:30.863988 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-17 01:07:30.863995 | orchestrator |
2026-01-17 01:07:30.864002 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2026-01-17 01:07:30.864009 | orchestrator | Saturday 17 January 2026 01:03:35 +0000 (0:00:01.176) 0:00:49.467 ******
2026-01-17 01:07:30.864018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-17 01:07:30.864027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-17 01:07:30.864035 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-17 01:07:30.864055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-17 01:07:30.864087 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-17 01:07:30.864097 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-17 01:07:30.864104 | orchestrator | 2026-01-17 01:07:30.864111 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-01-17 01:07:30.864118 | orchestrator | Saturday 17 January 2026 01:03:39 +0000 (0:00:04.514) 0:00:53.982 ****** 2026-01-17 01:07:30.864126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-17 01:07:30.864139 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:07:30.864166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-17 01:07:30.864175 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:07:30.864182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-17 01:07:30.864213 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-17 01:07:30.864222 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:07:30.864228 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:07:30.864235 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-17 01:07:30.864242 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:07:30.864249 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-17 01:07:30.864262 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:07:30.864270 | orchestrator | 2026-01-17 01:07:30.864277 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-01-17 01:07:30.864284 | orchestrator | Saturday 17 January 2026 01:03:43 +0000 (0:00:03.607) 0:00:57.589 ****** 2026-01-17 01:07:30.864294 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-17 01:07:30.864301 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:07:30.864332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-17 01:07:30.864341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-17 01:07:30.864348 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:07:30.864355 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:07:30.864362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-17 01:07:30.864374 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:07:30.864382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-17 01:07:30.864389 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:07:30.864405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-17 01:07:30.864415 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:07:30.864423 | orchestrator |
2026-01-17 01:07:30.864430 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2026-01-17 01:07:30.864437 | orchestrator | Saturday 17 January 2026 01:03:46 +0000 (0:00:02.965) 0:01:00.555 ******
2026-01-17 01:07:30.864444 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:07:30.864450 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:07:30.864456 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:07:30.864463 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:07:30.864469 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:07:30.864475 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:07:30.864482 | orchestrator |
2026-01-17 01:07:30.864488 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2026-01-17 01:07:30.864498 | orchestrator | Saturday 17 January 2026 01:03:49 +0000 (0:00:02.729) 0:01:03.285 ******
2026-01-17 01:07:30.864505 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:07:30.864511 | orchestrator |
2026-01-17 01:07:30.864517 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2026-01-17 01:07:30.864524 | orchestrator | Saturday 17 January 2026 01:03:49 +0000 (0:00:00.120) 0:01:03.406 ******
2026-01-17 01:07:30.864530 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:07:30.864536 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:07:30.864543 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:07:30.864549 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:07:30.864555 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:07:30.864562 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:07:30.864568 | orchestrator |
2026-01-17 01:07:30.864574 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2026-01-17 01:07:30.864581 | orchestrator | Saturday 17 January 2026 01:03:50 +0000 (0:00:00.799) 0:01:04.205 ******
2026-01-17 01:07:30.864593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-17 01:07:30.864600 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:07:30.864606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-17 01:07:30.864613 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:07:30.864620 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-17 01:07:30.864626 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:07:30.864668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-17 01:07:30.864677 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:07:30.864684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-17 01:07:30.864695 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:07:30.864703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-17 01:07:30.864710 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:07:30.864716 | orchestrator 
2026-01-17 01:07:30.864723 | orchestrator | TASK [neutron : Copying over config.json files for services] *******************
2026-01-17 01:07:30.864730 | orchestrator | Saturday 17 January 2026 01:03:53 +0000 (0:00:03.655) 0:01:07.861 ******
2026-01-17 01:07:30.864738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-17 01:07:30.864748 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-17 01:07:30.864762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-17 01:07:30.864776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-17 01:07:30.864784 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-17 01:07:30.864791 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-17 01:07:30.864798 | orchestrator |
2026-01-17 01:07:30.864805 | orchestrator | TASK [neutron : Copying over neutron.conf] *************************************
2026-01-17 01:07:30.864812 | orchestrator | Saturday 17 January 2026 01:03:59 +0000 (0:00:05.696) 0:01:13.557 ******
2026-01-17 01:07:30.864822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-17 01:07:30.864836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-17 01:07:30.864848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-17 01:07:30.864855 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-17 01:07:30.864863 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-17 01:07:30.864874 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-17 01:07:30.864882 | orchestrator |
2026-01-17 01:07:30.864889 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2026-01-17 01:07:30.864900 | orchestrator | Saturday 17 January 2026 01:04:08 +0000 (0:00:08.809) 0:01:22.367 ******
2026-01-17 01:07:30.864911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-17 01:07:30.864917 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:07:30.864925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-17 01:07:30.864932 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:07:30.864939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-17 01:07:30.864945 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:07:30.864954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-17 01:07:30.864961 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:07:30.864968 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-17 01:07:30.864981 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:07:30.864993 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-17 01:07:30.865001 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:07:30.865007 | orchestrator |
2026-01-17 01:07:30.865014 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2026-01-17 01:07:30.865022 | orchestrator | Saturday 17 January 2026 01:04:11 +0000 (0:00:02.971) 0:01:25.339 ******
2026-01-17 01:07:30.865029 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:07:30.865037 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:07:30.865044 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:07:30.865051 | orchestrator | changed: [testbed-node-1]
2026-01-17 01:07:30.865059 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:07:30.865066 | orchestrator | changed: [testbed-node-2]
2026-01-17 01:07:30.865073 | orchestrator |
2026-01-17 01:07:30.865080 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2026-01-17 01:07:30.865087 | orchestrator | Saturday 17 January 2026 01:04:13 +0000 (0:00:02.691) 0:01:28.031 ******
2026-01-17 01:07:30.865094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-17 01:07:30.865101 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:07:30.865108 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-17 01:07:30.865115 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:07:30.865126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-17 01:07:30.865139 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:07:30.865192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-17 01:07:30.865202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-17 01:07:30.865210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-17 01:07:30.865217 | orchestrator |
2026-01-17 01:07:30.865224 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2026-01-17 01:07:30.865231 | orchestrator | Saturday 17 January 2026 01:04:18 +0000 (0:00:04.378) 0:01:32.409 ******
2026-01-17 01:07:30.865239 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:07:30.865246 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:07:30.865253 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:07:30.865260 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:07:30.865273 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:07:30.865280 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:07:30.865287 | orchestrator |
2026-01-17 01:07:30.865295 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2026-01-17 01:07:30.865302 | orchestrator | Saturday 17 January 2026 01:04:20 +0000 (0:00:02.599) 0:01:35.008 ******
2026-01-17 01:07:30.865309 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:07:30.865316 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:07:30.865324 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:07:30.865331 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:07:30.865338 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:07:30.865349 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:07:30.865355 | orchestrator |
2026-01-17 01:07:30.865361 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2026-01-17 01:07:30.865368 | orchestrator | Saturday 17 January 2026 01:04:23 +0000 (0:00:02.814) 0:01:37.823 ******
2026-01-17 01:07:30.865375 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:07:30.865382 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:07:30.865390 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:07:30.865397 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:07:30.865404 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:07:30.865410 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:07:30.865418 | orchestrator |
2026-01-17 01:07:30.865425 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2026-01-17 01:07:30.865432 | orchestrator | Saturday 17 January 2026 01:04:25 +0000 (0:00:02.062) 0:01:39.885 ******
2026-01-17 01:07:30.865439 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:07:30.865447 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:07:30.865454 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:07:30.865461 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:07:30.865469 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:07:30.865476 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:07:30.865483 | orchestrator |
2026-01-17 01:07:30.865490 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-01-17 01:07:30.865497 | orchestrator | Saturday 17 January 2026 01:04:28 +0000 (0:00:03.698) 0:01:42.607 ******
2026-01-17 01:07:30.865504 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:07:30.865511 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:07:30.865518 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:07:30.865525 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:07:30.865537 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:07:30.865545 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:07:30.865553 | orchestrator |
2026-01-17 01:07:30.865560 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-01-17 01:07:30.865567 | orchestrator | Saturday 17 January 2026 01:04:32 +0000 (0:00:02.360) 0:01:46.306 ******
2026-01-17 01:07:30.865575 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:07:30.865582 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:07:30.865589 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:07:30.865595 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:07:30.865600 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:07:30.865606 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:07:30.865614 | orchestrator |
2026-01-17 01:07:30.865621 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-01-17 01:07:30.865629 | orchestrator | Saturday 17 January 2026 01:04:34 +0000 (0:00:02.360) 0:01:48.666 ******
2026-01-17 01:07:30.865635 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-01-17 01:07:30.865644 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:07:30.865651 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-01-17 01:07:30.865658 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:07:30.865664 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-01-17 01:07:30.865677 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:07:30.865684 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-01-17 01:07:30.865691 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:07:30.865699 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-01-17 01:07:30.865706 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:07:30.865713 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-01-17 01:07:30.865720 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:07:30.865727 | orchestrator |
2026-01-17 01:07:30.865734 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2026-01-17 01:07:30.865741 | orchestrator | Saturday 17 January 2026 01:04:36 +0000 (0:00:01.961) 0:01:50.628 ******
2026-01-17 01:07:30.865749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-17 01:07:30.865761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-17 01:07:30.865769 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:07:30.865776 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:07:30.865789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-17 01:07:30.865796 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:07:30.865804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-17 01:07:30.865816 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:07:30.865824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-17 01:07:30.865831 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:07:30.865839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-17 01:07:30.865847 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:07:30.865854 | orchestrator |
2026-01-17 01:07:30.865862 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2026-01-17 01:07:30.865868 | orchestrator | Saturday 17 January 2026 01:04:38 +0000 (0:00:02.252) 0:01:52.880 ******
2026-01-17 01:07:30.865879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-17 01:07:30.865887 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:07:30.865900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-17 01:07:30.865926 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:07:30.865933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-17 01:07:30.865940 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:07:30.865947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-17 01:07:30.865953 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:07:30.865962 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-17 01:07:30.865968 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:07:30.865975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-17 01:07:30.865982 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:07:30.865989 | orchestrator |
2026-01-17 01:07:30.865996 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-01-17 01:07:30.866011 | orchestrator | Saturday 17 January 2026 01:04:40 +0000 (0:00:02.152) 0:01:55.033 ******
2026-01-17 01:07:30.866064 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:07:30.866076 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:07:30.866084 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:07:30.866091 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:07:30.866098 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:07:30.866105 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:07:30.866112 | orchestrator |
2026-01-17 01:07:30.866119 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-01-17 01:07:30.866126 | orchestrator | Saturday 17 January 2026 01:04:43 +0000 (0:00:02.998) 0:01:58.031 ******
2026-01-17 01:07:30.866131 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:07:30.866137 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:07:30.866142 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:07:30.866160 | orchestrator | changed: [testbed-node-3]
2026-01-17 01:07:30.866166 | orchestrator | changed: [testbed-node-4]
2026-01-17 01:07:30.866173 | orchestrator | changed: [testbed-node-5]
2026-01-17 01:07:30.866179 | orchestrator |
2026-01-17 01:07:30.866184 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-01-17 01:07:30.866189 | orchestrator | Saturday 17 January 2026 01:04:47 +0000 (0:00:03.973) 0:02:02.005 ****** 2026-01-17 01:07:30.866195 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:07:30.866200 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:07:30.866206 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:07:30.866211 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:07:30.866216 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:07:30.866222 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:07:30.866227 | orchestrator | 2026-01-17 01:07:30.866234 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-01-17 01:07:30.866242 | orchestrator | Saturday 17 January 2026 01:04:51 +0000 (0:00:03.170) 0:02:05.176 ****** 2026-01-17 01:07:30.866248 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:07:30.866255 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:07:30.866263 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:07:30.866270 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:07:30.866277 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:07:30.866285 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:07:30.866292 | orchestrator | 2026-01-17 01:07:30.866299 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-01-17 01:07:30.866306 | orchestrator | Saturday 17 January 2026 01:04:54 +0000 (0:00:03.353) 0:02:08.529 ****** 2026-01-17 01:07:30.866313 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:07:30.866321 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:07:30.866328 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:07:30.866335 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:07:30.866342 | orchestrator | 
skipping: [testbed-node-5] 2026-01-17 01:07:30.866349 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:07:30.866356 | orchestrator | 2026-01-17 01:07:30.866364 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-01-17 01:07:30.866371 | orchestrator | Saturday 17 January 2026 01:04:56 +0000 (0:00:02.098) 0:02:10.628 ****** 2026-01-17 01:07:30.866378 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:07:30.866385 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:07:30.866393 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:07:30.866400 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:07:30.866407 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:07:30.866414 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:07:30.866422 | orchestrator | 2026-01-17 01:07:30.866429 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-01-17 01:07:30.866436 | orchestrator | Saturday 17 January 2026 01:04:59 +0000 (0:00:02.655) 0:02:13.283 ****** 2026-01-17 01:07:30.866443 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:07:30.866457 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:07:30.866464 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:07:30.866471 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:07:30.866479 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:07:30.866486 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:07:30.866493 | orchestrator | 2026-01-17 01:07:30.866500 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-01-17 01:07:30.866507 | orchestrator | Saturday 17 January 2026 01:05:01 +0000 (0:00:02.699) 0:02:15.982 ****** 2026-01-17 01:07:30.866514 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:07:30.866521 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:07:30.866529 | orchestrator | 
skipping: [testbed-node-1] 2026-01-17 01:07:30.866536 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:07:30.866543 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:07:30.866550 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:07:30.866557 | orchestrator | 2026-01-17 01:07:30.866568 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-01-17 01:07:30.866576 | orchestrator | Saturday 17 January 2026 01:05:03 +0000 (0:00:01.933) 0:02:17.916 ****** 2026-01-17 01:07:30.866583 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:07:30.866591 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:07:30.866598 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:07:30.866605 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:07:30.866612 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:07:30.866619 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:07:30.866626 | orchestrator | 2026-01-17 01:07:30.866633 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-01-17 01:07:30.866641 | orchestrator | Saturday 17 January 2026 01:05:07 +0000 (0:00:03.612) 0:02:21.528 ****** 2026-01-17 01:07:30.866648 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-17 01:07:30.866656 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:07:30.866663 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-17 01:07:30.866670 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:07:30.866678 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-17 01:07:30.866685 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:07:30.866692 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-17 01:07:30.866699 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:07:30.866712 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-17 01:07:30.866720 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:07:30.866727 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-17 01:07:30.866734 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:07:30.866741 | orchestrator | 2026-01-17 01:07:30.866748 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-01-17 01:07:30.866756 | orchestrator | Saturday 17 January 2026 01:05:10 +0000 (0:00:02.921) 0:02:24.450 ****** 2026-01-17 01:07:30.866765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-17 01:07:30.866779 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:07:30.866788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-17 01:07:30.866795 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:07:30.866806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-17 01:07:30.866813 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:07:30.866820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-17 01:07:30.866828 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:07:30.866841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-17 01:07:30.866849 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:07:30.866857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-17 01:07:30.866869 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:07:30.866876 | orchestrator | 2026-01-17 01:07:30.866884 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-01-17 01:07:30.866891 | orchestrator | Saturday 17 January 2026 01:05:12 +0000 (0:00:02.135) 0:02:26.586 ****** 2026-01-17 01:07:30.866898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-17 01:07:30.866909 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-17 01:07:30.866922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-17 01:07:30.866930 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-17 01:07:30.866942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-17 01:07:30.866949 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-17 01:07:30.866955 | orchestrator | 2026-01-17 01:07:30.866962 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-01-17 01:07:30.866969 | orchestrator | Saturday 17 January 2026 01:05:15 +0000 (0:00:02.888) 0:02:29.475 ****** 2026-01-17 01:07:30.866977 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:07:30.866984 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:07:30.866991 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:07:30.866999 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:07:30.867006 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:07:30.867013 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:07:30.867020 | orchestrator | 2026-01-17 01:07:30.867027 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-01-17 01:07:30.867034 | orchestrator | Saturday 17 January 2026 01:05:15 +0000 (0:00:00.504) 0:02:29.979 ****** 2026-01-17 01:07:30.867048 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:07:30.867055 | orchestrator | 2026-01-17 01:07:30.867063 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-01-17 01:07:30.867070 | orchestrator | Saturday 17 January 2026 01:05:18 +0000 (0:00:02.281) 0:02:32.261 ****** 2026-01-17 01:07:30.867076 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:07:30.867083 | orchestrator | 2026-01-17 01:07:30.867090 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-01-17 01:07:30.867097 | orchestrator | Saturday 17 January 2026 01:05:20 +0000 (0:00:02.610) 0:02:34.871 ****** 2026-01-17 01:07:30.867104 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:07:30.867112 | orchestrator | 2026-01-17 01:07:30.867119 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-17 01:07:30.867126 | orchestrator | Saturday 17 January 2026 01:06:03 +0000 (0:00:42.924) 0:03:17.796 ****** 2026-01-17 01:07:30.867133 | orchestrator | 
2026-01-17 01:07:30.867140 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-17 01:07:30.867160 | orchestrator | Saturday 17 January 2026 01:06:03 +0000 (0:00:00.067) 0:03:17.863 ****** 2026-01-17 01:07:30.867167 | orchestrator | 2026-01-17 01:07:30.867174 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-17 01:07:30.867187 | orchestrator | Saturday 17 January 2026 01:06:04 +0000 (0:00:00.367) 0:03:18.230 ****** 2026-01-17 01:07:30.867194 | orchestrator | 2026-01-17 01:07:30.867201 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-17 01:07:30.867208 | orchestrator | Saturday 17 January 2026 01:06:04 +0000 (0:00:00.108) 0:03:18.339 ****** 2026-01-17 01:07:30.867216 | orchestrator | 2026-01-17 01:07:30.867228 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-17 01:07:30.867236 | orchestrator | Saturday 17 January 2026 01:06:04 +0000 (0:00:00.098) 0:03:18.438 ****** 2026-01-17 01:07:30.867243 | orchestrator | 2026-01-17 01:07:30.867250 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-17 01:07:30.867257 | orchestrator | Saturday 17 January 2026 01:06:04 +0000 (0:00:00.141) 0:03:18.579 ****** 2026-01-17 01:07:30.867263 | orchestrator | 2026-01-17 01:07:30.867270 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-01-17 01:07:30.867278 | orchestrator | Saturday 17 January 2026 01:06:04 +0000 (0:00:00.175) 0:03:18.754 ****** 2026-01-17 01:07:30.867285 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:07:30.867292 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:07:30.867299 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:07:30.867307 | orchestrator | 2026-01-17 01:07:30.867313 | orchestrator | RUNNING HANDLER [neutron : 
Restart neutron-ovn-metadata-agent container] ******* 2026-01-17 01:07:30.867320 | orchestrator | Saturday 17 January 2026 01:06:35 +0000 (0:00:30.750) 0:03:49.505 ****** 2026-01-17 01:07:30.867328 | orchestrator | changed: [testbed-node-4] 2026-01-17 01:07:30.867336 | orchestrator | changed: [testbed-node-3] 2026-01-17 01:07:30.867343 | orchestrator | changed: [testbed-node-5] 2026-01-17 01:07:30.867350 | orchestrator | 2026-01-17 01:07:30.867358 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 01:07:30.867366 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-17 01:07:30.867374 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-01-17 01:07:30.867381 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-01-17 01:07:30.867388 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-17 01:07:30.867395 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-17 01:07:30.867403 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-17 01:07:30.867410 | orchestrator | 2026-01-17 01:07:30.867417 | orchestrator | 2026-01-17 01:07:30.867424 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 01:07:30.867431 | orchestrator | Saturday 17 January 2026 01:07:28 +0000 (0:00:53.357) 0:04:42.862 ****** 2026-01-17 01:07:30.867438 | orchestrator | =============================================================================== 2026-01-17 01:07:30.867446 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 53.36s 2026-01-17 01:07:30.867453 | orchestrator | neutron : Running 
Neutron bootstrap container -------------------------- 42.92s 2026-01-17 01:07:30.867460 | orchestrator | neutron : Restart neutron-server container ----------------------------- 30.75s 2026-01-17 01:07:30.867468 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 8.81s 2026-01-17 01:07:30.867474 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.05s 2026-01-17 01:07:30.867481 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 6.83s 2026-01-17 01:07:30.867493 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.70s 2026-01-17 01:07:30.867500 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.52s 2026-01-17 01:07:30.867507 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.38s 2026-01-17 01:07:30.867514 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.20s 2026-01-17 01:07:30.867525 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.97s 2026-01-17 01:07:30.867532 | orchestrator | Setting sysctl values --------------------------------------------------- 3.91s 2026-01-17 01:07:30.867539 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.75s 2026-01-17 01:07:30.867546 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.75s 2026-01-17 01:07:30.867553 | orchestrator | neutron : Copying over eswitchd.conf ------------------------------------ 3.70s 2026-01-17 01:07:30.867560 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.66s 2026-01-17 01:07:30.867567 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 3.61s 2026-01-17 01:07:30.867574 | orchestrator | service-cert-copy : neutron | 
Copying over backend internal TLS certificate --- 3.61s 2026-01-17 01:07:30.867583 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.41s 2026-01-17 01:07:30.867590 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 3.35s 2026-01-17 01:07:30.867597 | orchestrator | 2026-01-17 01:07:30 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED 2026-01-17 01:07:30.867605 | orchestrator | 2026-01-17 01:07:30 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED 2026-01-17 01:07:30.867612 | orchestrator | 2026-01-17 01:07:30 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED 2026-01-17 01:07:30.867624 | orchestrator | 2026-01-17 01:07:30 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:07:33.892830 | orchestrator | 2026-01-17 01:07:33 | INFO  | Task 945b4aeb-9733-435c-98f8-dedd4708352b is in state STARTED 2026-01-17 01:07:33.894686 | orchestrator | 2026-01-17 01:07:33 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED 2026-01-17 01:07:33.894736 | orchestrator | 2026-01-17 01:07:33 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED 2026-01-17 01:07:33.895231 | orchestrator | 2026-01-17 01:07:33 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED 2026-01-17 01:07:33.895260 | orchestrator | 2026-01-17 01:07:33 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:07:36.929232 | orchestrator | 2026-01-17 01:07:36 | INFO  | Task 945b4aeb-9733-435c-98f8-dedd4708352b is in state STARTED 2026-01-17 01:07:36.929924 | orchestrator | 2026-01-17 01:07:36 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED 2026-01-17 01:07:36.930729 | orchestrator | 2026-01-17 01:07:36 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED 2026-01-17 01:07:36.931504 | orchestrator | 2026-01-17 01:07:36 | INFO  | Task 
581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED 2026-01-17 01:07:36.931629 | orchestrator | 2026-01-17 01:07:36 | INFO  | Wait 1 second(s) until the next check [... identical polling output repeated every ~3 s (tasks 945b4aeb-9733-435c-98f8-dedd4708352b, 6314a902-a206-4763-9570-548a4193bb76, 5909bcd5-75ba-42bb-9574-2889166ac98d, 581d3af9-7e47-4e2b-9cab-6df33ae22e4c all in state STARTED) until 01:09:08 ...] 2026-01-17 01:09:08.337173 | orchestrator | 2026-01-17 01:09:08 | INFO  | Task 945b4aeb-9733-435c-98f8-dedd4708352b is in state STARTED 2026-01-17 01:09:08.339372 | orchestrator | 2026-01-17 01:09:08 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED 2026-01-17 01:09:08.342092 | orchestrator | 2026-01-17 01:09:08 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED 2026-01-17 01:09:08.343662 | orchestrator | 2026-01-17 01:09:08 | INFO  | Task 
581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED 2026-01-17 01:09:08.343773 | orchestrator | 2026-01-17 01:09:08 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:09:11.390058 | orchestrator | 2026-01-17 01:09:11.390129 | orchestrator | 2026-01-17 01:09:11 | INFO  | Task 945b4aeb-9733-435c-98f8-dedd4708352b is in state SUCCESS 2026-01-17 01:09:11.391873 | orchestrator | 2026-01-17 01:09:11.391921 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-17 01:09:11.391931 | orchestrator | 2026-01-17 01:09:11.391939 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-17 01:09:11.391950 | orchestrator | Saturday 17 January 2026 01:05:56 +0000 (0:00:00.302) 0:00:00.302 ****** 2026-01-17 01:09:11.392114 | orchestrator | ok: [testbed-manager] 2026-01-17 01:09:11.392127 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:09:11.392226 | orchestrator | ok: [testbed-node-1] 2026-01-17 01:09:11.392241 | orchestrator | ok: [testbed-node-2] 2026-01-17 01:09:11.392469 | orchestrator | ok: [testbed-node-3] 2026-01-17 01:09:11.392485 | orchestrator | ok: [testbed-node-4] 2026-01-17 01:09:11.392492 | orchestrator | ok: [testbed-node-5] 2026-01-17 01:09:11.392497 | orchestrator | 2026-01-17 01:09:11.392504 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-17 01:09:11.392510 | orchestrator | Saturday 17 January 2026 01:05:57 +0000 (0:00:00.936) 0:00:01.239 ****** 2026-01-17 01:09:11.392516 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-01-17 01:09:11.392532 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-01-17 01:09:11.392539 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-01-17 01:09:11.392544 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-01-17 01:09:11.392550 | orchestrator | ok: 
[testbed-node-3] => (item=enable_prometheus_True) 2026-01-17 01:09:11.392556 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-01-17 01:09:11.392562 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-01-17 01:09:11.392568 | orchestrator | 2026-01-17 01:09:11.392574 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-01-17 01:09:11.392580 | orchestrator | 2026-01-17 01:09:11.392586 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-17 01:09:11.392592 | orchestrator | Saturday 17 January 2026 01:05:58 +0000 (0:00:00.884) 0:00:02.124 ****** 2026-01-17 01:09:11.392598 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 01:09:11.392605 | orchestrator | 2026-01-17 01:09:11.392612 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-01-17 01:09:11.392623 | orchestrator | Saturday 17 January 2026 01:06:00 +0000 (0:00:01.743) 0:00:03.867 ****** 2026-01-17 01:09:11.392712 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True}}}}) 2026-01-17 01:09:11.392760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.392769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.392775 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.392942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.392973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.392982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.392988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.392995 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.393007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.393013 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.393020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.393058 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.393094 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.393108 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-17 01:09:11.393121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.393137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.393147 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.393156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.393191 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.393208 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.393218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.393227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.393244 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.393254 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.393264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.393274 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.393412 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.393427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.393433 | orchestrator | 2026-01-17 01:09:11.393440 | 
orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-17 01:09:11.393446 | orchestrator | Saturday 17 January 2026 01:06:03 +0000 (0:00:03.286) 0:00:07.154 ****** 2026-01-17 01:09:11.393452 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 01:09:11.393458 | orchestrator | 2026-01-17 01:09:11.393464 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-01-17 01:09:11.393728 | orchestrator | Saturday 17 January 2026 01:06:05 +0000 (0:00:02.257) 0:00:09.411 ****** 2026-01-17 01:09:11.393750 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-17 01:09:11.393762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.393771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.393780 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.393815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.393828 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.393844 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.393861 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.394141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.394151 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.394157 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.394164 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.394230 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.394244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.394340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.394463 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.394478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.394489 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.394500 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.394511 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.394550 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.394566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.394588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.394598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.394777 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-17 01:09:11.394787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.394793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 
'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.394821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.394833 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.394845 | orchestrator | 2026-01-17 01:09:11.394852 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-01-17 01:09:11.394858 | orchestrator | Saturday 17 January 2026 01:06:14 +0000 (0:00:08.206) 0:00:17.617 ****** 2026-01-17 01:09:11.394865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-17 01:09:11.394872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 01:09:11.394878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 01:09:11.394884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-17 01:09:11.394890 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 01:09:11.394914 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-17 01:09:11.394921 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:09:11.394933 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-17 01:09:11.394945 | orchestrator | skipping: [testbed-manager] 
=> (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-17 01:09:11.394967 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-17 01:09:11.394975 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 01:09:11.394982 | orchestrator | skipping: [testbed-manager] 2026-01-17 01:09:11.394988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-17 01:09:11.394994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 01:09:11.395020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 01:09:11.395037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-17 01:09:11.395046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 01:09:11.395057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-17 01:09:11.395071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 
01:09:11.395084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 01:09:11.395094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-17 01:09:11.395105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 01:09:11.395115 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:09:11.395130 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:09:11.395166 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-17 01:09:11.395182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-17 01:09:11.395194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-17 01:09:11.395204 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:09:11.395214 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-17 01:09:11.395226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-17 01:09:11.395232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-17 01:09:11.395238 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:09:11.395244 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-17 01:09:11.395255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-17 01:09:11.395283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-17 01:09:11.395291 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:09:11.395297 | orchestrator | 2026-01-17 01:09:11.395303 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-01-17 01:09:11.395309 | orchestrator | Saturday 17 January 2026 01:06:16 +0000 (0:00:02.184) 0:00:19.802 ****** 2026-01-17 01:09:11.395318 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-17 01:09:11.395325 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-17 01:09:11.395331 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-17 01:09:11.395337 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-17 01:09:11.395348 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 01:09:11.395372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-17 01:09:11.395383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 01:09:11.395389 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 01:09:11.395395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-17 01:09:11.395401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 01:09:11.395407 | orchestrator | skipping: [testbed-manager] 2026-01-17 01:09:11.395415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-17 01:09:11.395426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 01:09:11.395434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 01:09:11.395459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-17 01:09:11.395475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 01:09:11.395486 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:09:11.395496 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:09:11.395507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-17 01:09:11.395517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 01:09:11.395528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 01:09:11.395540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-17 01:09:11.395558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-17 01:09:11.395570 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:09:11.395610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-17 01:09:11.395624 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-17 01:09:11.395636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-17 01:09:11.395644 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:09:11.395651 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-17 01:09:11.395658 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-17 01:09:11.395665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-17 01:09:11.395677 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:09:11.395684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-17 01:09:11.395691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-17 01:09:11.395718 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-17 01:09:11.395726 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:09:11.395733 | orchestrator | 2026-01-17 01:09:11.395740 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-01-17 01:09:11.395747 | orchestrator | Saturday 17 January 2026 01:06:18 +0000 (0:00:01.812) 0:00:21.614 ****** 2026-01-17 01:09:11.395757 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-17 01:09:11.395765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.395772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.395787 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.395793 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.395799 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.395822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.395830 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.395839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.395845 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.395852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.395861 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.395868 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-17 01:09:11.395874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 01:09:11.395897 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-17 01:09:11.395907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 01:09:11.395914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 01:09:11.395920 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-17 01:09:11.395932 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-17 01:09:11.395938 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-17 01:09:11.395944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 01:09:11.395994 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-17 01:09:11.396003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-17 01:09:11.396013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-17 01:09:11.396019 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 01:09:11.396030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-17 01:09:11.396036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 01:09:11.396042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 01:09:11.396048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-17 01:09:11.396054 | orchestrator |
2026-01-17 01:09:11.396061 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2026-01-17 01:09:11.396068 | orchestrator | Saturday 17 January 2026 01:06:24 +0000 (0:00:06.785) 0:00:28.400 ******
2026-01-17 01:09:11.396078 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-17 01:09:11.396087 | orchestrator |
2026-01-17 01:09:11.396097 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2026-01-17 01:09:11.396133 | orchestrator | Saturday 17 January 2026 01:06:26 +0000 (0:00:01.394) 0:00:29.794 ******
2026-01-17 01:09:11.396146 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097401, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4843488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396162 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097401, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4843488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396178 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097401, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4843488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396185 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097401, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4843488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396191 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097441, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4893734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396198 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097401, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4843488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396224 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097401, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4843488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396232 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097401, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4843488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396241 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097441, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4893734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396251 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097441, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4893734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396257 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097441, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4893734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396263 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097388, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4836211, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396269 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097441, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4893734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396291 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097388, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4836211, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396299 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097388, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4836211, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396307 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1097423, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4875114, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396317 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097441, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4893734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396324 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1097423, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4875114, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396330 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097441, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4893734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396336 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097388, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4836211, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396358 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097388, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4836211, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396365 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1097423, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4875114, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396379 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097379, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4827933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396385 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097379, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4827933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396392 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097388, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4836211, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396398 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1097423, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4875114, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396404 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097405, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4848886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396410 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097379, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4827933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396431 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097405, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4848886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396447 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1097423, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4875114, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396454 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1097423, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4875114, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396460 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097405, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4848886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396466 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097379, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4827933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396472 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1097414, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.486703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396478 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1097414, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.486703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396500 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097388, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4836211, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396514 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1097414, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.486703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396520 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097407, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4848886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396526 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097379, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4827933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396532 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097379, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4827933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396538 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097405, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4848886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396544 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097407, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4848886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396565 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097405, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4848886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396580 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097396, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4842408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396590 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097405, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4848886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396604 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097407, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4848886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396618 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1097414, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.486703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396629 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1097414, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.486703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396638 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097396, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4842408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396684 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1097414, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.486703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396701 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097437, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.489123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.396713 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097396, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime':
1768521739.0, 'ctime': 1768609111.4842408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396723 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097407, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4848886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396732 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097407, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4848886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396739 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097437, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.489123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396745 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097396, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4842408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396774 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1097423, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4875114, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 01:09:11.396784 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097407, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4848886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396791 | orchestrator | skipping: 
[testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097437, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.489123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396797 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097368, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4808867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396803 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097437, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.489123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396809 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097396, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4842408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396815 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097368, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4808867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396841 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097368, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4808867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396851 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097396, 'dev': 119, 'nlink': 1, 
'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4842408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396857 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097368, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4808867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396863 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097453, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4910035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396870 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097437, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.489123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396876 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097453, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4910035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396885 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097453, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4910035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396910 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097379, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4827933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 01:09:11.396919 | orchestrator | 
skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097453, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4910035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396926 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097437, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.489123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396932 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097368, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4808867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396938 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097433, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4887934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396944 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097433, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4887934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396972 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097433, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4887934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396983 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097368, 'dev': 119, 
'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4808867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.396994 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097433, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4887934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397000 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097453, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4910035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397006 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097385, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4827933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397012 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097385, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4827933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397018 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097433, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4887934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397030 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097385, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4827933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 
01:09:11.397047 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097405, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4848886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 01:09:11.397062 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097385, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4827933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397072 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097371, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4822745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397081 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097453, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4910035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397091 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097385, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4827933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397101 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097371, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4822745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397116 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097371, 'dev': 119, 
'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4822745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397131 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097371, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4822745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397144 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097433, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4887934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397153 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097409, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.486041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397163 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1097414, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.486703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 01:09:11.397172 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097371, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4822745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397191 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097409, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.486041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 
01:09:11.397202 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097409, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.486041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397218 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097409, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.486041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397233 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097385, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4827933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397244 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097408, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.485447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397253 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097408, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.485447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397263 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097408, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.485447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397277 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097409, 'dev': 119, 'nlink': 1, 'atime': 
1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.486041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397288 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097408, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.485447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397302 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097451, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4902716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397312 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:09:11.397326 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097451, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4902716, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397335 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:09:11.397343 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097371, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4822745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397352 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097451, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4902716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397368 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:09:11.397377 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097451, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4902716, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397387 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:09:11.397396 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097407, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4848886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 01:09:11.397404 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097408, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.485447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397418 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097409, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.486041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397431 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097451, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4902716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397440 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:09:11.397450 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097408, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.485447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 01:09:11.397460 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097451, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4902716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-17 
01:09:11.397474 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:09:11.397483 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097396, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4842408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 01:09:11.397493 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097437, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.489123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 01:09:11.397502 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097368, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4808867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 01:09:11.397518 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097453, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4910035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 01:09:11.397532 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097433, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4887934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 01:09:11.397542 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097385, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4827933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 01:09:11.397557 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097371, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4822745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 01:09:11.397568 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097409, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.486041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 01:09:11.397579 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097408, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.485447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-17 01:09:11.397589 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097451, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4902716, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-17 01:09:11.397599 | orchestrator |
2026-01-17 01:09:11.397610 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-01-17 01:09:11.397621 | orchestrator | Saturday 17 January 2026 01:06:55 +0000 (0:00:29.143) 0:00:58.937 ******
2026-01-17 01:09:11.397631 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-17 01:09:11.397641 | orchestrator |
2026-01-17 01:09:11.397657 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-01-17 01:09:11.397668 | orchestrator | Saturday 17 January 2026 01:06:56 +0000 (0:00:00.759) 0:00:59.697 ******
2026-01-17 01:09:11.397677 | orchestrator | [WARNING]: Skipped
2026-01-17 01:09:11.397688 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-17 01:09:11.397695 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-01-17 01:09:11.397701 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-17 01:09:11.397708 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-01-17 01:09:11.397718 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-17 01:09:11.397727 | orchestrator | [WARNING]: Skipped
2026-01-17 01:09:11.397737 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-17 01:09:11.397748 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-01-17 01:09:11.397763 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-17 01:09:11.397773 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-01-17 01:09:11.397784 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-17 01:09:11.397804 | orchestrator | [WARNING]: Skipped
2026-01-17 01:09:11.397810 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-17 01:09:11.397816 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-01-17 01:09:11.397822 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-17 01:09:11.397827 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-01-17 01:09:11.397833 | orchestrator | [WARNING]: Skipped
2026-01-17 01:09:11.397839 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-17 01:09:11.397845 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-01-17 01:09:11.397850 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-17 01:09:11.397856 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-01-17 01:09:11.397862 | orchestrator | [WARNING]: Skipped
2026-01-17 01:09:11.397868 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-17 01:09:11.397874 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-01-17 01:09:11.397879 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-17 01:09:11.397885 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-01-17 01:09:11.397891 | orchestrator | [WARNING]: Skipped
2026-01-17 01:09:11.397896 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-17 01:09:11.397902 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-01-17 01:09:11.397908 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-17 01:09:11.397919 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-01-17 01:09:11.397929 | orchestrator | [WARNING]: Skipped
2026-01-17 01:09:11.397938 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-17 01:09:11.397948 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-01-17 01:09:11.397976 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-17 01:09:11.397986 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-01-17 01:09:11.397995 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-17 01:09:11.398004 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-17 01:09:11.398042 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-17 01:09:11.398055 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-17 01:09:11.398064 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-17 01:09:11.398074 | orchestrator |
2026-01-17 01:09:11.398084 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-01-17 01:09:11.398094 | orchestrator | Saturday 17 January 2026 01:06:57 +0000 (0:00:01.750) 0:01:01.447 ******
2026-01-17 01:09:11.398103 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-17 01:09:11.398114 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-17 01:09:11.398123 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:09:11.398133 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:09:11.398142 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-17 01:09:11.398152 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:09:11.398163 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-17 01:09:11.398172 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:09:11.398181 | orchestrator | skipping: [testbed-node-4]
=> (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-17 01:09:11.398191 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:09:11.398201 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-17 01:09:11.398211 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:09:11.398231 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-17 01:09:11.398242 | orchestrator |
2026-01-17 01:09:11.398251 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-01-17 01:09:11.398261 | orchestrator | Saturday 17 January 2026 01:07:14 +0000 (0:00:16.924) 0:01:18.372 ******
2026-01-17 01:09:11.398272 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-17 01:09:11.398292 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:09:11.398302 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-17 01:09:11.398311 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:09:11.398322 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-17 01:09:11.398332 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:09:11.398342 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-17 01:09:11.398352 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:09:11.398362 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-17 01:09:11.398372 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:09:11.398388 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-17 01:09:11.398399 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:09:11.398410 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-17 01:09:11.398416 | orchestrator |
2026-01-17 01:09:11.398422 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-01-17 01:09:11.398427 | orchestrator | Saturday 17 January 2026 01:07:18 +0000 (0:00:03.653) 0:01:22.025 ******
2026-01-17 01:09:11.398433 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-17 01:09:11.398440 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-17 01:09:11.398446 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-17 01:09:11.398452 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:09:11.398458 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:09:11.398464 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:09:11.398470 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-17 01:09:11.398475 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:09:11.398481 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-17 01:09:11.398487 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:09:11.398493 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-17 01:09:11.398499 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:09:11.398505 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-17 01:09:11.398511 | orchestrator |
2026-01-17 01:09:11.398517 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-01-17 01:09:11.398523 | orchestrator | Saturday 17 January 2026 01:07:20 +0000 (0:00:01.692) 0:01:23.718 ******
2026-01-17 01:09:11.398528 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-17 01:09:11.398534 | orchestrator |
2026-01-17 01:09:11.398540 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-01-17 01:09:11.398551 | orchestrator | Saturday 17 January 2026 01:07:20 +0000 (0:00:00.683) 0:01:24.402 ******
2026-01-17 01:09:11.398557 | orchestrator | skipping: [testbed-manager]
2026-01-17 01:09:11.398567 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:09:11.398576 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:09:11.398586 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:09:11.398595 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:09:11.398605 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:09:11.398615 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:09:11.398626 | orchestrator |
2026-01-17 01:09:11.398636 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-01-17 01:09:11.398646 | orchestrator | Saturday 17 January 2026 01:07:21 +0000 (0:00:00.638) 0:01:25.041 ******
2026-01-17 01:09:11.398654 | orchestrator | skipping: [testbed-manager]
2026-01-17 01:09:11.398659 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:09:11.398665 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:09:11.398671 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:09:11.398676 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:09:11.398682 | orchestrator | changed: [testbed-node-2]
2026-01-17 01:09:11.398687 | orchestrator | changed: [testbed-node-1]
2026-01-17 01:09:11.398693 | orchestrator |
2026-01-17 01:09:11.398699 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-01-17 01:09:11.398705 | orchestrator | Saturday 17 January 2026 01:07:23 +0000 (0:00:02.254) 0:01:27.295 ******
2026-01-17 01:09:11.398711 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-17 01:09:11.398719 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-17 01:09:11.398729 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:09:11.398739 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:09:11.398748 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-17 01:09:11.398758 | orchestrator | skipping: [testbed-manager]
2026-01-17 01:09:11.398769 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-17 01:09:11.398779 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:09:11.398795 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-17 01:09:11.398805 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:09:11.398814 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-17 01:09:11.398820 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:09:11.398826 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-17 01:09:11.398831 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:09:11.398837 | orchestrator |
2026-01-17 01:09:11.398843 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-01-17 01:09:11.398848 | orchestrator | Saturday 17 January 2026 01:07:26 +0000 (0:00:02.472) 0:01:29.768 ******
2026-01-17 01:09:11.398854 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-17 01:09:11.398860 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:09:11.398869 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-17 01:09:11.398876 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:09:11.398881 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-17 01:09:11.398887 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:09:11.398893 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-17 01:09:11.398899 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:09:11.398905 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-17 01:09:11.398915 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:09:11.398921 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-17 01:09:11.398927 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:09:11.398933 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-17 01:09:11.398938 | orchestrator |
2026-01-17 01:09:11.398944 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-01-17 01:09:11.398950 | orchestrator | Saturday 17 January 2026 01:07:28 +0000 (0:00:02.042) 0:01:31.811 ******
2026-01-17 01:09:11.399006 | orchestrator | [WARNING]: Skipped
2026-01-17 01:09:11.399017 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-01-17 01:09:11.399025 | orchestrator | due to this access issue:
2026-01-17 01:09:11.399032 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-01-17 01:09:11.399041 | orchestrator | not a directory
2026-01-17 01:09:11.399051 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-17 01:09:11.399061 | orchestrator |
2026-01-17 01:09:11.399071 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-01-17 01:09:11.399080 | orchestrator | Saturday 17 January 2026 01:07:29 +0000 (0:00:01.078) 0:01:32.889 ******
2026-01-17 01:09:11.399091 | orchestrator | skipping: [testbed-manager]
2026-01-17 01:09:11.399097 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:09:11.399103 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:09:11.399109 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:09:11.399115 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:09:11.399120 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:09:11.399126 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:09:11.399132 | orchestrator |
2026-01-17 01:09:11.399138 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-01-17 01:09:11.399144 | orchestrator | Saturday 17 January 2026 01:07:30 +0000 (0:00:01.025) 0:01:34.202 ******
2026-01-17 01:09:11.399150 | orchestrator | skipping: [testbed-manager]
2026-01-17 01:09:11.399155 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:09:11.399161 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:09:11.399167 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:09:11.399172 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:09:11.399178 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:09:11.399184 | orchestrator | skipping: [testbed-node-5]
2026-01-17
01:09:11.399190 | orchestrator | 2026-01-17 01:09:11.399195 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-01-17 01:09:11.399201 | orchestrator | Saturday 17 January 2026 01:07:31 +0000 (0:00:01.025) 0:01:35.228 ****** 2026-01-17 01:09:11.399208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.399216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.399227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.399248 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-17 01:09:11.399259 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.399269 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.399279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.399290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.399299 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.399315 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-17 01:09:11.399332 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.399347 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.399357 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.399368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.399379 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.399389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.399399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.399420 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.399434 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.399445 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.399456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.399466 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.399478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.399488 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-17 01:09:11.399511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-17 01:09:11.399527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.399538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.399549 | orchestrator | changed: [testbed-manager] 
=> (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.399557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-17 01:09:11.399563 | orchestrator | 2026-01-17 01:09:11.399569 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-01-17 01:09:11.399575 | orchestrator | Saturday 17 January 2026 01:07:36 +0000 (0:00:04.611) 0:01:39.839 ****** 2026-01-17 01:09:11.399581 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-17 01:09:11.399587 | orchestrator | skipping: [testbed-manager] 2026-01-17 01:09:11.399593 | orchestrator | 2026-01-17 01:09:11.399598 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-17 01:09:11.399604 | orchestrator | Saturday 17 January 2026 01:07:37 +0000 (0:00:01.322) 0:01:41.161 ****** 2026-01-17 01:09:11.399610 | orchestrator | 2026-01-17 01:09:11.399615 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-17 01:09:11.399621 | orchestrator | Saturday 17 January 2026 01:07:37 +0000 
(0:00:00.068) 0:01:41.230 ****** 2026-01-17 01:09:11.399627 | orchestrator | 2026-01-17 01:09:11.399632 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-17 01:09:11.399642 | orchestrator | Saturday 17 January 2026 01:07:37 +0000 (0:00:00.060) 0:01:41.290 ****** 2026-01-17 01:09:11.399647 | orchestrator | 2026-01-17 01:09:11.399653 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-17 01:09:11.399659 | orchestrator | Saturday 17 January 2026 01:07:37 +0000 (0:00:00.059) 0:01:41.350 ****** 2026-01-17 01:09:11.399665 | orchestrator | 2026-01-17 01:09:11.399670 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-17 01:09:11.399676 | orchestrator | Saturday 17 January 2026 01:07:38 +0000 (0:00:00.181) 0:01:41.531 ****** 2026-01-17 01:09:11.399681 | orchestrator | 2026-01-17 01:09:11.399687 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-17 01:09:11.399693 | orchestrator | Saturday 17 January 2026 01:07:38 +0000 (0:00:00.058) 0:01:41.590 ****** 2026-01-17 01:09:11.399699 | orchestrator | 2026-01-17 01:09:11.399704 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-17 01:09:11.399714 | orchestrator | Saturday 17 January 2026 01:07:38 +0000 (0:00:00.096) 0:01:41.686 ****** 2026-01-17 01:09:11.399723 | orchestrator | 2026-01-17 01:09:11.399733 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-01-17 01:09:11.399742 | orchestrator | Saturday 17 January 2026 01:07:38 +0000 (0:00:00.091) 0:01:41.778 ****** 2026-01-17 01:09:11.399750 | orchestrator | changed: [testbed-manager] 2026-01-17 01:09:11.399760 | orchestrator | 2026-01-17 01:09:11.399770 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-01-17 
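The run of `Flush handlers` tasks followed by one `RUNNING HANDLER ... Restart ...` per container reflects Ansible's notify/handler semantics: a handler notified by several changed tasks during a play still runs at most once per flush. A minimal sketch of that deduplication (hypothetical names, not Ansible internals):

```python
class HandlerQueue:
    """Collect handler notifications; run each handler at most once per flush."""

    def __init__(self) -> None:
        self._notified: list[str] = []  # insertion-ordered, deduplicated

    def notify(self, handler: str) -> None:
        if handler not in self._notified:
            self._notified.append(handler)

    def flush(self) -> list[str]:
        ran, self._notified = self._notified, []
        return ran


q = HandlerQueue()
# several config-copy tasks changed, each notifying the same handler
for _task in ("copy config", "copy rules", "copy web config"):
    q.notify("Restart prometheus-server container")
q.notify("Restart prometheus-node-exporter container")
# flush() runs each distinct handler once, in notification order
```

This is why each container is restarted exactly once at the end of the role even though many templating tasks reported `changed`.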
01:09:11.399785 | orchestrator | Saturday 17 January 2026 01:07:53 +0000 (0:00:15.431) 0:01:57.209 ****** 2026-01-17 01:09:11.399795 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:09:11.399804 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:09:11.399810 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:09:11.399815 | orchestrator | changed: [testbed-node-5] 2026-01-17 01:09:11.399824 | orchestrator | changed: [testbed-node-3] 2026-01-17 01:09:11.399834 | orchestrator | changed: [testbed-node-4] 2026-01-17 01:09:11.399844 | orchestrator | changed: [testbed-manager] 2026-01-17 01:09:11.399854 | orchestrator | 2026-01-17 01:09:11.399863 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-01-17 01:09:11.399874 | orchestrator | Saturday 17 January 2026 01:08:09 +0000 (0:00:15.693) 0:02:12.903 ****** 2026-01-17 01:09:11.399884 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:09:11.399894 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:09:11.399904 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:09:11.399915 | orchestrator | 2026-01-17 01:09:11.399921 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-01-17 01:09:11.399937 | orchestrator | Saturday 17 January 2026 01:08:19 +0000 (0:00:09.791) 0:02:22.695 ****** 2026-01-17 01:09:11.399948 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:09:11.400016 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:09:11.400024 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:09:11.400030 | orchestrator | 2026-01-17 01:09:11.400035 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-01-17 01:09:11.400041 | orchestrator | Saturday 17 January 2026 01:08:25 +0000 (0:00:06.360) 0:02:29.055 ****** 2026-01-17 01:09:11.400047 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:09:11.400053 | orchestrator | changed: 
[testbed-node-5] 2026-01-17 01:09:11.400059 | orchestrator | changed: [testbed-manager] 2026-01-17 01:09:11.400065 | orchestrator | changed: [testbed-node-3] 2026-01-17 01:09:11.400071 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:09:11.400076 | orchestrator | changed: [testbed-node-4] 2026-01-17 01:09:11.400082 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:09:11.400088 | orchestrator | 2026-01-17 01:09:11.400094 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-01-17 01:09:11.400099 | orchestrator | Saturday 17 January 2026 01:08:41 +0000 (0:00:16.107) 0:02:45.163 ****** 2026-01-17 01:09:11.400105 | orchestrator | changed: [testbed-manager] 2026-01-17 01:09:11.400111 | orchestrator | 2026-01-17 01:09:11.400122 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-01-17 01:09:11.400128 | orchestrator | Saturday 17 January 2026 01:08:49 +0000 (0:00:07.863) 0:02:53.027 ****** 2026-01-17 01:09:11.400133 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:09:11.400139 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:09:11.400145 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:09:11.400151 | orchestrator | 2026-01-17 01:09:11.400157 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-01-17 01:09:11.400162 | orchestrator | Saturday 17 January 2026 01:08:54 +0000 (0:00:04.747) 0:02:57.775 ****** 2026-01-17 01:09:11.400168 | orchestrator | changed: [testbed-manager] 2026-01-17 01:09:11.400174 | orchestrator | 2026-01-17 01:09:11.400180 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-01-17 01:09:11.400185 | orchestrator | Saturday 17 January 2026 01:08:59 +0000 (0:00:05.576) 0:03:03.352 ****** 2026-01-17 01:09:11.400191 | orchestrator | changed: [testbed-node-4] 2026-01-17 01:09:11.400197 | orchestrator | changed: 
[testbed-node-5] 2026-01-17 01:09:11.400203 | orchestrator | changed: [testbed-node-3] 2026-01-17 01:09:11.400208 | orchestrator | 2026-01-17 01:09:11.400214 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 01:09:11.400220 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-17 01:09:11.400226 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-17 01:09:11.400232 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-17 01:09:11.400238 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-17 01:09:11.400244 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-17 01:09:11.400250 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-17 01:09:11.400255 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-17 01:09:11.400261 | orchestrator | 2026-01-17 01:09:11.400267 | orchestrator | 2026-01-17 01:09:11.400273 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 01:09:11.400279 | orchestrator | Saturday 17 January 2026 01:09:10 +0000 (0:00:10.671) 0:03:14.023 ****** 2026-01-17 01:09:11.400284 | orchestrator | =============================================================================== 2026-01-17 01:09:11.400290 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 29.14s 2026-01-17 01:09:11.400296 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.92s 2026-01-17 01:09:11.400302 | orchestrator | prometheus : Restart prometheus-cadvisor container 
--------------------- 16.11s 2026-01-17 01:09:11.400307 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 15.69s 2026-01-17 01:09:11.400313 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 15.43s 2026-01-17 01:09:11.400323 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.67s 2026-01-17 01:09:11.400329 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 9.79s 2026-01-17 01:09:11.400335 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 8.21s 2026-01-17 01:09:11.400341 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.86s 2026-01-17 01:09:11.400346 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.79s 2026-01-17 01:09:11.400356 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 6.36s 2026-01-17 01:09:11.400362 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.58s 2026-01-17 01:09:11.400368 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 4.75s 2026-01-17 01:09:11.400374 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.61s 2026-01-17 01:09:11.400383 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.65s 2026-01-17 01:09:11.400389 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.29s 2026-01-17 01:09:11.400394 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.47s 2026-01-17 01:09:11.400400 | orchestrator | prometheus : include_tasks ---------------------------------------------- 2.26s 2026-01-17 01:09:11.400406 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter 
-------------------- 2.25s 2026-01-17 01:09:11.400412 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.18s 2026-01-17 01:09:11.400418 | orchestrator | 2026-01-17 01:09:11 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED 2026-01-17 01:09:11.400424 | orchestrator | 2026-01-17 01:09:11 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED 2026-01-17 01:09:11.400430 | orchestrator | 2026-01-17 01:09:11 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED 2026-01-17 01:09:11.400435 | orchestrator | 2026-01-17 01:09:11 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:09:14.438291 | orchestrator | 2026-01-17 01:09:14 | INFO  | Task 8d62683c-dbd0-429d-9062-afd0fb05298d is in state STARTED 2026-01-17 01:09:14.439175 | orchestrator | 2026-01-17 01:09:14 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED 2026-01-17 01:09:14.443325 | orchestrator | 2026-01-17 01:09:14 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED 2026-01-17 01:09:14.444695 | orchestrator | 2026-01-17 01:09:14 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED 2026-01-17 01:09:14.444761 | orchestrator | 2026-01-17 01:09:14 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:09:17.492694 | orchestrator | 2026-01-17 01:09:17 | INFO  | Task 8d62683c-dbd0-429d-9062-afd0fb05298d is in state STARTED 2026-01-17 01:09:17.493316 | orchestrator | 2026-01-17 01:09:17 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED 2026-01-17 01:09:17.494094 | orchestrator | 2026-01-17 01:09:17 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED 2026-01-17 01:09:17.495827 | orchestrator | 2026-01-17 01:09:17 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED 2026-01-17 01:09:17.495862 | orchestrator | 2026-01-17 01:09:17 | INFO  | Wait 1 second(s) until the next check 
2026-01-17 01:09:20.533540 | orchestrator | 2026-01-17 01:09:20 | INFO  | Task 8d62683c-dbd0-429d-9062-afd0fb05298d is in state STARTED
2026-01-17 01:09:20.533590 | orchestrator | 2026-01-17 01:09:20 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED
2026-01-17 01:09:20.533595 | orchestrator | 2026-01-17 01:09:20 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED
2026-01-17 01:09:20.533600 | orchestrator | 2026-01-17 01:09:20 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED
2026-01-17 01:09:20.533604 | orchestrator | 2026-01-17 01:09:20 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:09:23.568097 | orchestrator | 2026-01-17 01:09:23 | INFO  | Task 8d62683c-dbd0-429d-9062-afd0fb05298d is in state STARTED
2026-01-17 01:09:23.569424 | orchestrator | 2026-01-17 01:09:23 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED
2026-01-17 01:09:23.571625 | orchestrator | 2026-01-17 01:09:23 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED
2026-01-17 01:09:23.573733 | orchestrator | 2026-01-17 01:09:23 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED
2026-01-17 01:09:23.573767 | orchestrator | 2026-01-17 01:09:23 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:09:26.623073 | orchestrator | 2026-01-17 01:09:26 | INFO  | Task 8d62683c-dbd0-429d-9062-afd0fb05298d is in state STARTED
2026-01-17 01:09:26.623541 | orchestrator | 2026-01-17 01:09:26 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED
2026-01-17 01:09:26.624283 | orchestrator | 2026-01-17 01:09:26 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED
2026-01-17 01:09:26.625068 | orchestrator | 2026-01-17 01:09:26 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED
2026-01-17 01:09:26.625099 | orchestrator | 2026-01-17 01:09:26 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:09:29.664842 | orchestrator | 2026-01-17 01:09:29 | INFO  | Task 8d62683c-dbd0-429d-9062-afd0fb05298d is in state STARTED
2026-01-17 01:09:29.665398 | orchestrator | 2026-01-17 01:09:29 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED
2026-01-17 01:09:29.666201 | orchestrator | 2026-01-17 01:09:29 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED
2026-01-17 01:09:29.667000 | orchestrator | 2026-01-17 01:09:29 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED
2026-01-17 01:09:29.667089 | orchestrator | 2026-01-17 01:09:29 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:09:32.715617 | orchestrator | 2026-01-17 01:09:32 | INFO  | Task 8d62683c-dbd0-429d-9062-afd0fb05298d is in state STARTED
2026-01-17 01:09:32.717721 | orchestrator | 2026-01-17 01:09:32 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED
2026-01-17 01:09:32.722508 | orchestrator | 2026-01-17 01:09:32 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED
2026-01-17 01:09:32.724646 | orchestrator | 2026-01-17 01:09:32 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED
2026-01-17 01:09:32.724685 | orchestrator | 2026-01-17 01:09:32 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:09:35.757066 | orchestrator | 2026-01-17 01:09:35 | INFO  | Task 8d62683c-dbd0-429d-9062-afd0fb05298d is in state STARTED
2026-01-17 01:09:35.757556 | orchestrator | 2026-01-17 01:09:35 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED
2026-01-17 01:09:35.758313 | orchestrator | 2026-01-17 01:09:35 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED
2026-01-17 01:09:35.759897 | orchestrator | 2026-01-17 01:09:35 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED
2026-01-17 01:09:35.759984 | orchestrator | 2026-01-17 01:09:35 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:09:38.804237 | orchestrator | 2026-01-17 01:09:38 | INFO  | Task 8d62683c-dbd0-429d-9062-afd0fb05298d is in state STARTED
2026-01-17 01:09:38.807786 | orchestrator | 2026-01-17 01:09:38 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED
2026-01-17 01:09:38.810646 | orchestrator | 2026-01-17 01:09:38 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED
2026-01-17 01:09:38.815217 | orchestrator | 2026-01-17 01:09:38 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED
2026-01-17 01:09:38.815270 | orchestrator | 2026-01-17 01:09:38 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:09:41.861644 | orchestrator | 2026-01-17 01:09:41 | INFO  | Task 8d62683c-dbd0-429d-9062-afd0fb05298d is in state STARTED
2026-01-17 01:09:41.864733 | orchestrator | 2026-01-17 01:09:41 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED
2026-01-17 01:09:41.866969 | orchestrator | 2026-01-17 01:09:41 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED
2026-01-17 01:09:41.870541 | orchestrator | 2026-01-17 01:09:41 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED
2026-01-17 01:09:41.870616 | orchestrator | 2026-01-17 01:09:41 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:09:44.918188 | orchestrator | 2026-01-17 01:09:44 | INFO  | Task 8d62683c-dbd0-429d-9062-afd0fb05298d is in state STARTED
2026-01-17 01:09:44.920825 | orchestrator | 2026-01-17 01:09:44 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED
2026-01-17 01:09:44.922712 | orchestrator | 2026-01-17 01:09:44 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED
2026-01-17 01:09:44.924366 | orchestrator | 2026-01-17 01:09:44 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED
2026-01-17 01:09:44.924408 | orchestrator | 2026-01-17 01:09:44 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:09:47.976024 | orchestrator | 2026-01-17 01:09:47 | INFO  | Task
8d62683c-dbd0-429d-9062-afd0fb05298d is in state STARTED
2026-01-17 01:09:47.979155 | orchestrator | 2026-01-17 01:09:47 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED
2026-01-17 01:09:47.981814 | orchestrator | 2026-01-17 01:09:47 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state STARTED
2026-01-17 01:09:47.984652 | orchestrator | 2026-01-17 01:09:47 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED
2026-01-17 01:09:47.985208 | orchestrator | 2026-01-17 01:09:47 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:09:51.039907 | orchestrator | 2026-01-17 01:09:51 | INFO  | Task ea7066e0-613e-4196-ad82-e4a4fdb9c336 is in state STARTED
2026-01-17 01:09:51.042989 | orchestrator | 2026-01-17 01:09:51 | INFO  | Task 8d62683c-dbd0-429d-9062-afd0fb05298d is in state STARTED
2026-01-17 01:09:51.045289 | orchestrator | 2026-01-17 01:09:51 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED
2026-01-17 01:09:51.048723 | orchestrator | 2026-01-17 01:09:51 | INFO  | Task 5909bcd5-75ba-42bb-9574-2889166ac98d is in state SUCCESS
2026-01-17 01:09:51.050180 | orchestrator |
2026-01-17 01:09:51.050231 | orchestrator |
2026-01-17 01:09:51.050238 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-17 01:09:51.050244 | orchestrator |
2026-01-17 01:09:51.050250 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-17 01:09:51.050255 | orchestrator | Saturday 17 January 2026 01:06:43 +0000 (0:00:00.550) 0:00:00.550 ******
2026-01-17 01:09:51.050261 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:09:51.050267 | orchestrator | ok: [testbed-node-1]
2026-01-17 01:09:51.050272 | orchestrator | ok: [testbed-node-2]
2026-01-17 01:09:51.050279 | orchestrator |
2026-01-17 01:09:51.050286 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-17
01:09:51.050293 | orchestrator | Saturday 17 January 2026 01:06:44 +0000 (0:00:00.641) 0:00:01.192 ******
2026-01-17 01:09:51.050301 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-01-17 01:09:51.050310 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-01-17 01:09:51.050318 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-01-17 01:09:51.050324 | orchestrator |
2026-01-17 01:09:51.050331 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-01-17 01:09:51.050336 | orchestrator |
2026-01-17 01:09:51.050358 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-01-17 01:09:51.050364 | orchestrator | Saturday 17 January 2026 01:06:45 +0000 (0:00:00.966) 0:00:02.158 ******
2026-01-17 01:09:51.050371 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 01:09:51.050378 | orchestrator |
2026-01-17 01:09:51.050385 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-01-17 01:09:51.050390 | orchestrator | Saturday 17 January 2026 01:06:46 +0000 (0:00:00.962) 0:00:03.121 ******
2026-01-17 01:09:51.050395 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-01-17 01:09:51.050400 | orchestrator |
2026-01-17 01:09:51.050405 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-01-17 01:09:51.050410 | orchestrator | Saturday 17 January 2026 01:06:49 +0000 (0:00:03.859) 0:00:06.980 ******
2026-01-17 01:09:51.050416 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-01-17 01:09:51.050421 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-01-17 01:09:51.050427 | orchestrator |
2026-01-17 01:09:51.050432 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-01-17 01:09:51.050437 | orchestrator | Saturday 17 January 2026 01:06:56 +0000 (0:00:06.282) 0:00:13.263 ******
2026-01-17 01:09:51.050443 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-17 01:09:51.050448 | orchestrator |
2026-01-17 01:09:51.050455 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-01-17 01:09:51.050461 | orchestrator | Saturday 17 January 2026 01:06:59 +0000 (0:00:02.886) 0:00:16.150 ******
2026-01-17 01:09:51.050466 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-17 01:09:51.050472 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-01-17 01:09:51.050476 | orchestrator |
2026-01-17 01:09:51.050482 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-01-17 01:09:51.050487 | orchestrator | Saturday 17 January 2026 01:07:02 +0000 (0:00:03.624) 0:00:19.774 ******
2026-01-17 01:09:51.050493 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-17 01:09:51.050497 | orchestrator |
2026-01-17 01:09:51.050503 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2026-01-17 01:09:51.050508 | orchestrator | Saturday 17 January 2026 01:07:06 +0000 (0:00:03.308) 0:00:23.082 ******
2026-01-17 01:09:51.050513 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-01-17 01:09:51.050518 | orchestrator |
2026-01-17 01:09:51.050524 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-01-17 01:09:51.050529 | orchestrator | Saturday 17 January 2026 01:07:09 +0000 (0:00:03.589) 0:00:26.671 ******
2026-01-17 01:09:51.050556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group':
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-17 01:09:51.050570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-17 01:09:51.050576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-17 01:09:51.050583 | orchestrator | 2026-01-17 01:09:51.050589 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-17 01:09:51.050597 | orchestrator | Saturday 17 January 2026 01:07:13 +0000 (0:00:03.440) 0:00:30.112 ****** 2026-01-17 01:09:51.050606 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 01:09:51.050621 | orchestrator | 2026-01-17 01:09:51.050626 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-01-17 01:09:51.050636 | orchestrator | Saturday 17 January 2026 01:07:13 +0000 (0:00:00.703) 0:00:30.815 ****** 2026-01-17 01:09:51.050642 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:09:51.050647 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:09:51.050652 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:09:51.050657 | 
orchestrator |
2026-01-17 01:09:51.050663 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-01-17 01:09:51.050670 | orchestrator | Saturday 17 January 2026 01:07:18 +0000 (0:00:04.788) 0:00:35.604 ******
2026-01-17 01:09:51.050677 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-01-17 01:09:51.050685 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-01-17 01:09:51.050729 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-01-17 01:09:51.050737 | orchestrator |
2026-01-17 01:09:51.050743 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-01-17 01:09:51.050749 | orchestrator | Saturday 17 January 2026 01:07:20 +0000 (0:00:01.924) 0:00:37.529 ******
2026-01-17 01:09:51.050754 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-01-17 01:09:51.050759 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-01-17 01:09:51.050765 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-01-17 01:09:51.050770 | orchestrator |
2026-01-17 01:09:51.050775 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-01-17 01:09:51.050780 | orchestrator | Saturday 17 January 2026 01:07:21 +0000 (0:00:01.153) 0:00:38.682 ******
2026-01-17 01:09:51.050786 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:09:51.050790 | orchestrator | ok: [testbed-node-1]
2026-01-17 01:09:51.050795 | orchestrator | ok: [testbed-node-2]
2026-01-17 01:09:51.050801 | orchestrator |
2026-01-17 01:09:51.050806 | orchestrator | TASK
[glance : Check if policies shall be overwritten] *************************
2026-01-17 01:09:51.050811 | orchestrator | Saturday 17 January 2026 01:07:22 +0000 (0:00:00.739) 0:00:39.422 ******
2026-01-17 01:09:51.050816 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:09:51.050821 | orchestrator |
2026-01-17 01:09:51.050827 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-01-17 01:09:51.050831 | orchestrator | Saturday 17 January 2026 01:07:22 +0000 (0:00:00.442) 0:00:39.865 ******
2026-01-17 01:09:51.050836 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:09:51.050841 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:09:51.050846 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:09:51.050851 | orchestrator |
2026-01-17 01:09:51.050856 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-01-17 01:09:51.050862 | orchestrator | Saturday 17 January 2026 01:07:23 +0000 (0:00:00.327) 0:00:40.192 ******
2026-01-17 01:09:51.050868 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 01:09:51.050873 | orchestrator |
2026-01-17 01:09:51.050906 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2026-01-17 01:09:51.050912 | orchestrator | Saturday 17 January 2026 01:07:23 +0000 (0:00:00.544) 0:00:40.737 ******
2026-01-17 01:09:51.050924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-17 01:09:51.050944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': 
'30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-17 01:09:51.050950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-17 01:09:51.050959 | orchestrator | 2026-01-17 01:09:51.050964 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-01-17 01:09:51.050969 | orchestrator | Saturday 17 January 2026 01:07:29 +0000 (0:00:05.663) 0:00:46.401 ****** 2026-01-17 01:09:51.050980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-17 01:09:51.050986 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:09:51.050992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-17 01:09:51.051001 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:09:51.051013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-17 01:09:51.051019 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:09:51.051024 | orchestrator | 2026-01-17 01:09:51.051029 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-01-17 01:09:51.051035 | orchestrator | Saturday 17 January 2026 01:07:34 +0000 (0:00:04.947) 0:00:51.349 ****** 2026-01-17 01:09:51.051040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-17 01:09:51.051050 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:09:51.051058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 
2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-17 01:09:51.051063 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:09:51.051073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-17 01:09:51.051078 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:09:51.051084 | orchestrator | 
2026-01-17 01:09:51.051089 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-01-17 01:09:51.051094 | orchestrator | Saturday 17 January 2026 01:07:37 +0000 (0:00:03.672) 0:00:55.021 ****** 2026-01-17 01:09:51.051099 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:09:51.051110 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:09:51.051115 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:09:51.051120 | orchestrator | 2026-01-17 01:09:51.051126 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-01-17 01:09:51.051131 | orchestrator | Saturday 17 January 2026 01:07:41 +0000 (0:00:03.965) 0:00:58.986 ****** 2026-01-17 01:09:51.051137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-17 01:09:51.051149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-17 01:09:51.051156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-17 01:09:51.051165 | orchestrator | 2026-01-17 01:09:51.051171 | orchestrator | TASK [glance : Copying 
over glance-api.conf] *********************************** 2026-01-17 01:09:51.051177 | orchestrator | Saturday 17 January 2026 01:07:46 +0000 (0:00:04.622) 0:01:03.609 ****** 2026-01-17 01:09:51.051182 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:09:51.051187 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:09:51.051192 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:09:51.051198 | orchestrator | 2026-01-17 01:09:51.051203 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-01-17 01:09:51.051208 | orchestrator | Saturday 17 January 2026 01:07:54 +0000 (0:00:08.196) 0:01:11.805 ****** 2026-01-17 01:09:51.051213 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:09:51.051218 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:09:51.051221 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:09:51.051224 | orchestrator | 2026-01-17 01:09:51.051227 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-01-17 01:09:51.051230 | orchestrator | Saturday 17 January 2026 01:08:02 +0000 (0:00:08.087) 0:01:19.893 ****** 2026-01-17 01:09:51.051233 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:09:51.051237 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:09:51.051242 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:09:51.051247 | orchestrator | 2026-01-17 01:09:51.051255 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-01-17 01:09:51.051260 | orchestrator | Saturday 17 January 2026 01:08:06 +0000 (0:00:03.887) 0:01:23.780 ****** 2026-01-17 01:09:51.051265 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:09:51.051274 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:09:51.051280 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:09:51.051285 | orchestrator | 2026-01-17 01:09:51.051290 | orchestrator | TASK [glance : Copying over 
property-protections-rules.conf] ******************* 2026-01-17 01:09:51.051296 | orchestrator | Saturday 17 January 2026 01:08:11 +0000 (0:00:04.787) 0:01:28.568 ****** 2026-01-17 01:09:51.051301 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:09:51.051307 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:09:51.051310 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:09:51.051313 | orchestrator | 2026-01-17 01:09:51.051317 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-01-17 01:09:51.051320 | orchestrator | Saturday 17 January 2026 01:08:14 +0000 (0:00:03.294) 0:01:31.862 ****** 2026-01-17 01:09:51.051323 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:09:51.051327 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:09:51.051336 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:09:51.051342 | orchestrator | 2026-01-17 01:09:51.051347 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-01-17 01:09:51.051352 | orchestrator | Saturday 17 January 2026 01:08:15 +0000 (0:00:00.306) 0:01:32.168 ****** 2026-01-17 01:09:51.051358 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-17 01:09:51.051363 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:09:51.051368 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-17 01:09:51.051373 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:09:51.051378 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-17 01:09:51.051384 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:09:51.051389 | orchestrator | 2026-01-17 01:09:51.051394 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-01-17 01:09:51.051400 | 
orchestrator | Saturday 17 January 2026 01:08:18 +0000 (0:00:03.428) 0:01:35.597 ****** 2026-01-17 01:09:51.051405 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:09:51.051411 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:09:51.051416 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:09:51.051421 | orchestrator | 2026-01-17 01:09:51.051427 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-01-17 01:09:51.051432 | orchestrator | Saturday 17 January 2026 01:08:24 +0000 (0:00:06.245) 0:01:41.842 ****** 2026-01-17 01:09:51.051438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-17 01:09:51.051451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}}}}) 2026-01-17 01:09:51.051461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-17 01:09:51.051467 | orchestrator | 2026-01-17 01:09:51.051472 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-17 01:09:51.051477 | orchestrator | Saturday 17 January 2026 01:08:33 +0000 (0:00:08.744) 
0:01:50.587 ****** 2026-01-17 01:09:51.051482 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:09:51.051487 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:09:51.051493 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:09:51.051498 | orchestrator | 2026-01-17 01:09:51.051503 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-01-17 01:09:51.051508 | orchestrator | Saturday 17 January 2026 01:08:33 +0000 (0:00:00.321) 0:01:50.908 ****** 2026-01-17 01:09:51.051514 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:09:51.051519 | orchestrator | 2026-01-17 01:09:51.051525 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-01-17 01:09:51.051530 | orchestrator | Saturday 17 January 2026 01:08:36 +0000 (0:00:02.436) 0:01:53.345 ****** 2026-01-17 01:09:51.051535 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:09:51.051540 | orchestrator | 2026-01-17 01:09:51.051545 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-01-17 01:09:51.051551 | orchestrator | Saturday 17 January 2026 01:08:38 +0000 (0:00:02.368) 0:01:55.713 ****** 2026-01-17 01:09:51.051556 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:09:51.051561 | orchestrator | 2026-01-17 01:09:51.051566 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-01-17 01:09:51.051575 | orchestrator | Saturday 17 January 2026 01:08:40 +0000 (0:00:02.120) 0:01:57.833 ****** 2026-01-17 01:09:51.051580 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:09:51.051585 | orchestrator | 2026-01-17 01:09:51.051590 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-01-17 01:09:51.051596 | orchestrator | Saturday 17 January 2026 01:09:11 +0000 (0:00:30.903) 0:02:28.736 ****** 2026-01-17 01:09:51.051601 | orchestrator | changed: 
[testbed-node-0] 2026-01-17 01:09:51.051606 | orchestrator | 2026-01-17 01:09:51.051614 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-17 01:09:51.051619 | orchestrator | Saturday 17 January 2026 01:09:14 +0000 (0:00:02.677) 0:02:31.414 ****** 2026-01-17 01:09:51.051624 | orchestrator | 2026-01-17 01:09:51.051632 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-17 01:09:51.051637 | orchestrator | Saturday 17 January 2026 01:09:14 +0000 (0:00:00.394) 0:02:31.808 ****** 2026-01-17 01:09:51.051642 | orchestrator | 2026-01-17 01:09:51.051647 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-17 01:09:51.051652 | orchestrator | Saturday 17 January 2026 01:09:14 +0000 (0:00:00.088) 0:02:31.897 ****** 2026-01-17 01:09:51.051657 | orchestrator | 2026-01-17 01:09:51.051663 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-01-17 01:09:51.051668 | orchestrator | Saturday 17 January 2026 01:09:14 +0000 (0:00:00.111) 0:02:32.008 ****** 2026-01-17 01:09:51.051673 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:09:51.051679 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:09:51.051684 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:09:51.051689 | orchestrator | 2026-01-17 01:09:51.051694 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 01:09:51.051700 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-17 01:09:51.051706 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-17 01:09:51.051711 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-17 01:09:51.051716 | orchestrator | 2026-01-17 
01:09:51.051722 | orchestrator | 2026-01-17 01:09:51.051727 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 01:09:51.051732 | orchestrator | Saturday 17 January 2026 01:09:48 +0000 (0:00:33.981) 0:03:05.990 ****** 2026-01-17 01:09:51.051737 | orchestrator | =============================================================================== 2026-01-17 01:09:51.051742 | orchestrator | glance : Restart glance-api container ---------------------------------- 33.98s 2026-01-17 01:09:51.051747 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 30.90s 2026-01-17 01:09:51.051753 | orchestrator | glance : Check glance containers ---------------------------------------- 8.74s 2026-01-17 01:09:51.051758 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 8.20s 2026-01-17 01:09:51.051763 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 8.09s 2026-01-17 01:09:51.051768 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.28s 2026-01-17 01:09:51.051773 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 6.25s 2026-01-17 01:09:51.051779 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.66s 2026-01-17 01:09:51.051784 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.95s 2026-01-17 01:09:51.051789 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.79s 2026-01-17 01:09:51.051794 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.79s 2026-01-17 01:09:51.051803 | orchestrator | glance : Copying over config.json files for services -------------------- 4.62s 2026-01-17 01:09:51.051808 | orchestrator | glance : Creating TLS backend PEM File 
---------------------------------- 3.97s 2026-01-17 01:09:51.051814 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.89s 2026-01-17 01:09:51.051819 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.86s 2026-01-17 01:09:51.051824 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.67s 2026-01-17 01:09:51.051829 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.62s 2026-01-17 01:09:51.051835 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.59s 2026-01-17 01:09:51.051840 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.44s 2026-01-17 01:09:51.051845 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.43s 2026-01-17 01:09:51.053390 | orchestrator | 2026-01-17 01:09:51 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED 2026-01-17 01:09:51.053439 | orchestrator | 2026-01-17 01:09:51 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:09:54.106373 | orchestrator | 2026-01-17 01:09:54 | INFO  | Task ea7066e0-613e-4196-ad82-e4a4fdb9c336 is in state STARTED 2026-01-17 01:09:54.109171 | orchestrator | 2026-01-17 01:09:54 | INFO  | Task 8d62683c-dbd0-429d-9062-afd0fb05298d is in state STARTED 2026-01-17 01:09:54.112922 | orchestrator | 2026-01-17 01:09:54 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED 2026-01-17 01:09:54.116388 | orchestrator | 2026-01-17 01:09:54 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED 2026-01-17 01:09:54.116429 | orchestrator | 2026-01-17 01:09:54 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:09:57.160266 | orchestrator | 2026-01-17 01:09:57 | INFO  | Task ea7066e0-613e-4196-ad82-e4a4fdb9c336 is in state STARTED 2026-01-17 01:09:57.162230 | orchestrator | 
2026-01-17 01:09:57 | INFO  | Task 8d62683c-dbd0-429d-9062-afd0fb05298d is in state STARTED 2026-01-17 01:09:57.164060 | orchestrator | 2026-01-17 01:09:57 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED 2026-01-17 01:09:57.165734 | orchestrator | 2026-01-17 01:09:57 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED 2026-01-17 01:09:57.165783 | orchestrator | 2026-01-17 01:09:57 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:10:00.217396 | orchestrator | 2026-01-17 01:10:00 | INFO  | Task ea7066e0-613e-4196-ad82-e4a4fdb9c336 is in state STARTED 2026-01-17 01:10:00.217458 | orchestrator | 2026-01-17 01:10:00 | INFO  | Task 8d62683c-dbd0-429d-9062-afd0fb05298d is in state STARTED 2026-01-17 01:10:00.217468 | orchestrator | 2026-01-17 01:10:00 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED 2026-01-17 01:10:00.218671 | orchestrator | 2026-01-17 01:10:00 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED 2026-01-17 01:10:00.218710 | orchestrator | 2026-01-17 01:10:00 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:10:03.268927 | orchestrator | 2026-01-17 01:10:03 | INFO  | Task ea7066e0-613e-4196-ad82-e4a4fdb9c336 is in state STARTED 2026-01-17 01:10:03.273284 | orchestrator | 2026-01-17 01:10:03 | INFO  | Task 8d62683c-dbd0-429d-9062-afd0fb05298d is in state STARTED 2026-01-17 01:10:03.276020 | orchestrator | 2026-01-17 01:10:03 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED 2026-01-17 01:10:03.280569 | orchestrator | 2026-01-17 01:10:03 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED 2026-01-17 01:10:03.280649 | orchestrator | 2026-01-17 01:10:03 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:10:06.308955 | orchestrator | 2026-01-17 01:10:06 | INFO  | Task ea7066e0-613e-4196-ad82-e4a4fdb9c336 is in state STARTED 2026-01-17 01:10:06.310564 | orchestrator | 2026-01-17 01:10:06 | INFO  | 
Task 8d62683c-dbd0-429d-9062-afd0fb05298d is in state STARTED 2026-01-17 01:10:06.311736 | orchestrator | 2026-01-17 01:10:06 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED 2026-01-17 01:10:06.312637 | orchestrator | 2026-01-17 01:10:06 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED 2026-01-17 01:10:06.312786 | orchestrator | 2026-01-17 01:10:06 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:10:09.367721 | orchestrator | 2026-01-17 01:10:09 | INFO  | Task ea7066e0-613e-4196-ad82-e4a4fdb9c336 is in state STARTED 2026-01-17 01:10:09.370919 | orchestrator | 2026-01-17 01:10:09 | INFO  | Task 8d62683c-dbd0-429d-9062-afd0fb05298d is in state STARTED 2026-01-17 01:10:09.374524 | orchestrator | 2026-01-17 01:10:09 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED 2026-01-17 01:10:09.377867 | orchestrator | 2026-01-17 01:10:09 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED 2026-01-17 01:10:09.377926 | orchestrator | 2026-01-17 01:10:09 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:10:12.431781 | orchestrator | 2026-01-17 01:10:12 | INFO  | Task ea7066e0-613e-4196-ad82-e4a4fdb9c336 is in state STARTED 2026-01-17 01:10:12.434164 | orchestrator | 2026-01-17 01:10:12 | INFO  | Task 8d62683c-dbd0-429d-9062-afd0fb05298d is in state STARTED 2026-01-17 01:10:12.435947 | orchestrator | 2026-01-17 01:10:12 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state STARTED 2026-01-17 01:10:12.437477 | orchestrator | 2026-01-17 01:10:12 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED 2026-01-17 01:10:12.437516 | orchestrator | 2026-01-17 01:10:12 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:10:15.479572 | orchestrator | 2026-01-17 01:10:15 | INFO  | Task ea7066e0-613e-4196-ad82-e4a4fdb9c336 is in state STARTED 2026-01-17 01:10:15.482245 | orchestrator | 2026-01-17 01:10:15 | INFO  | Task 
8d62683c-dbd0-429d-9062-afd0fb05298d is in state STARTED 2026-01-17 01:10:15.483510 | orchestrator | 2026-01-17 01:10:15 | INFO  | Task 6314a902-a206-4763-9570-548a4193bb76 is in state SUCCESS 2026-01-17 01:10:15.485140 | orchestrator | 2026-01-17 01:10:15.485182 | orchestrator | 2026-01-17 01:10:15.485187 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-17 01:10:15.485191 | orchestrator | 2026-01-17 01:10:15.485195 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-17 01:10:15.485199 | orchestrator | Saturday 17 January 2026 01:07:21 +0000 (0:00:00.258) 0:00:00.258 ****** 2026-01-17 01:10:15.485205 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:10:15.485211 | orchestrator | ok: [testbed-node-1] 2026-01-17 01:10:15.485217 | orchestrator | ok: [testbed-node-2] 2026-01-17 01:10:15.485222 | orchestrator | 2026-01-17 01:10:15.485228 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-17 01:10:15.485242 | orchestrator | Saturday 17 January 2026 01:07:21 +0000 (0:00:00.377) 0:00:00.636 ****** 2026-01-17 01:10:15.485246 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-01-17 01:10:15.485249 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-01-17 01:10:15.485253 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-01-17 01:10:15.485256 | orchestrator | 2026-01-17 01:10:15.485259 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-01-17 01:10:15.485262 | orchestrator | 2026-01-17 01:10:15.485265 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-17 01:10:15.485268 | orchestrator | Saturday 17 January 2026 01:07:22 +0000 (0:00:00.539) 0:00:01.176 ****** 2026-01-17 01:10:15.485281 | orchestrator | included: 
/ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 01:10:15.485285 | orchestrator | 2026-01-17 01:10:15.485288 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-01-17 01:10:15.485291 | orchestrator | Saturday 17 January 2026 01:07:23 +0000 (0:00:00.779) 0:00:01.955 ****** 2026-01-17 01:10:15.485318 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-01-17 01:10:15.485322 | orchestrator | 2026-01-17 01:10:15.485325 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-01-17 01:10:15.485328 | orchestrator | Saturday 17 January 2026 01:07:27 +0000 (0:00:03.949) 0:00:05.905 ****** 2026-01-17 01:10:15.485331 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-01-17 01:10:15.485334 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-01-17 01:10:15.485338 | orchestrator | 2026-01-17 01:10:15.485341 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-01-17 01:10:15.485344 | orchestrator | Saturday 17 January 2026 01:07:33 +0000 (0:00:06.202) 0:00:12.107 ****** 2026-01-17 01:10:15.485347 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-17 01:10:15.485350 | orchestrator | 2026-01-17 01:10:15.485353 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-01-17 01:10:15.485360 | orchestrator | Saturday 17 January 2026 01:07:36 +0000 (0:00:03.249) 0:00:15.357 ****** 2026-01-17 01:10:15.485365 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-17 01:10:15.485371 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-01-17 01:10:15.485376 | orchestrator | 2026-01-17 
01:10:15.485381 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-01-17 01:10:15.485385 | orchestrator | Saturday 17 January 2026 01:07:39 +0000 (0:00:03.308) 0:00:18.665 ****** 2026-01-17 01:10:15.485391 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-17 01:10:15.485397 | orchestrator | 2026-01-17 01:10:15.485402 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-01-17 01:10:15.485408 | orchestrator | Saturday 17 January 2026 01:07:43 +0000 (0:00:03.436) 0:00:22.102 ****** 2026-01-17 01:10:15.485411 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-01-17 01:10:15.485414 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-01-17 01:10:15.485417 | orchestrator | 2026-01-17 01:10:15.485420 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-01-17 01:10:15.485424 | orchestrator | Saturday 17 January 2026 01:07:50 +0000 (0:00:06.913) 0:00:29.015 ****** 2026-01-17 01:10:15.485428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}}) 2026-01-17 01:10:15.485447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-17 01:10:15.485455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-17 01:10:15.485459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.485464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.485470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.485478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': 
True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.485497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.485504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': 
'30'}}}) 2026-01-17 01:10:15.485509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.485516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.485521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.485527 | orchestrator | 2026-01-17 01:10:15.485538 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-17 01:10:15.485548 | orchestrator | Saturday 17 January 2026 01:07:53 +0000 (0:00:02.845) 0:00:31.861 ****** 2026-01-17 01:10:15.485553 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:10:15.485558 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:10:15.485567 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:10:15.485572 | orchestrator | 2026-01-17 01:10:15.485577 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-17 01:10:15.485582 | orchestrator | Saturday 17 January 2026 01:07:53 +0000 (0:00:00.439) 0:00:32.301 ****** 2026-01-17 01:10:15.485588 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 01:10:15.485591 | orchestrator | 2026-01-17 01:10:15.485595 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-01-17 01:10:15.485599 | orchestrator | Saturday 17 January 2026 01:07:55 +0000 (0:00:01.925) 0:00:34.226 ****** 2026-01-17 01:10:15.485628 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-01-17 01:10:15.485637 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-01-17 01:10:15.485640 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-01-17 01:10:15.485643 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-01-17 01:10:15.485646 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-01-17 01:10:15.485650 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-01-17 01:10:15.485653 | orchestrator 
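(Editor's note, outside the captured log.) The cinderv3 endpoint URLs registered near the top of this excerpt end in `%(tenant_id)s`. That is a Python percent-format placeholder: it is stored verbatim in the Keystone catalog, and the client substitutes the project ID at request time. A minimal sketch of that substitution — the project ID below is made up for illustration:

```python
# Endpoint templates exactly as registered by service-ks-register in the log.
endpoints = {
    "internal": "https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s",
    "public": "https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s",
}

def resolve(template: str, tenant_id: str) -> str:
    """Fill the %(tenant_id)s placeholder with Python %-formatting,
    the same mechanism clients use when expanding catalog templates."""
    return template % {"tenant_id": tenant_id}

# Hypothetical project ID, purely for demonstration.
url = resolve(endpoints["internal"], "0123456789abcdef")
print(url)  # https://api-int.testbed.osism.xyz:8776/v3/0123456789abcdef
```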
| 2026-01-17 01:10:15.485656 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-01-17 01:10:15.485659 | orchestrator | Saturday 17 January 2026 01:07:59 +0000 (0:00:04.058) 0:00:38.285 ****** 2026-01-17 01:10:15.485666 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-17 01:10:15.485669 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-17 01:10:15.485673 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-17 01:10:15.485681 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-17 01:10:15.485689 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-17 01:10:15.485693 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-17 01:10:15.485697 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 
'cluster': 'ceph', 'enabled': True}]) 2026-01-17 01:10:15.485701 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-17 01:10:15.485704 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-17 01:10:15.485711 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-17 01:10:15.485718 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-17 01:10:15.485723 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 
'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-17 01:10:15.485728 | orchestrator | 2026-01-17 01:10:15.485735 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-01-17 01:10:15.485742 | orchestrator | Saturday 17 January 2026 01:08:03 +0000 (0:00:04.327) 0:00:42.613 ****** 2026-01-17 01:10:15.485747 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-01-17 01:10:15.485753 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-01-17 01:10:15.485758 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-01-17 01:10:15.485763 | orchestrator | 2026-01-17 01:10:15.485769 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-01-17 01:10:15.485774 | orchestrator | Saturday 17 January 2026 01:08:06 +0000 (0:00:02.280) 0:00:44.893 ****** 2026-01-17 01:10:15.485780 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-01-17 01:10:15.485785 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-01-17 01:10:15.485794 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-01-17 01:10:15.485800 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-01-17 01:10:15.485804 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-01-17 01:10:15.485810 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-01-17 01:10:15.485815 | orchestrator | 2026-01-17 01:10:15.485820 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-01-17 01:10:15.485823 | orchestrator | Saturday 17 January 2026 01:08:08 +0000 (0:00:02.796) 0:00:47.690 ****** 2026-01-17 01:10:15.485826 | 
orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-01-17 01:10:15.485830 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-01-17 01:10:15.485863 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-01-17 01:10:15.485866 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-01-17 01:10:15.485869 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-01-17 01:10:15.485872 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-01-17 01:10:15.485875 | orchestrator | 2026-01-17 01:10:15.485878 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-01-17 01:10:15.485882 | orchestrator | Saturday 17 January 2026 01:08:10 +0000 (0:00:01.430) 0:00:49.120 ****** 2026-01-17 01:10:15.485885 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:10:15.485888 | orchestrator | 2026-01-17 01:10:15.485891 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-01-17 01:10:15.485894 | orchestrator | Saturday 17 January 2026 01:08:10 +0000 (0:00:00.206) 0:00:49.326 ****** 2026-01-17 01:10:15.485897 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:10:15.485900 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:10:15.485903 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:10:15.485906 | orchestrator | 2026-01-17 01:10:15.485909 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-17 01:10:15.485912 | orchestrator | Saturday 17 January 2026 01:08:10 +0000 (0:00:00.362) 0:00:49.689 ****** 2026-01-17 01:10:15.485916 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 01:10:15.485919 | orchestrator | 2026-01-17 01:10:15.485922 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-01-17 01:10:15.485928 | 
orchestrator | Saturday 17 January 2026 01:08:11 +0000 (0:00:00.901) 0:00:50.591 ****** 2026-01-17 01:10:15.485934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-17 01:10:15.485938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-17 01:10:15.485947 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-17 01:10:15.485950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.485954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.485960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.485966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.485969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.485975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.485978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.485982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.485989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.485993 | orchestrator | 2026-01-17 01:10:15.485996 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-01-17 01:10:15.485999 | orchestrator | Saturday 17 January 2026 01:08:15 +0000 (0:00:04.067) 0:00:54.658 ****** 2026-01-17 01:10:15.486003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-17 01:10:15.486008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486043 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:10:15.486049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-17 01:10:15.486055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486074 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:10:15.486077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-17 01:10:15.486081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  
2026-01-17 01:10:15.486091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486097 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:10:15.486100 | orchestrator | 2026-01-17 01:10:15.486104 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-01-17 01:10:15.486107 | orchestrator | Saturday 17 January 2026 01:08:16 +0000 (0:00:00.900) 0:00:55.559 ****** 2026-01-17 01:10:15.486110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  
2026-01-17 01:10:15.486113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486126 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:10:15.486131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-17 01:10:15.486137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486152 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:10:15.486160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-17 01:10:15.486171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486191 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:10:15.486195 | orchestrator | 2026-01-17 01:10:15.486204 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-01-17 01:10:15.486210 | orchestrator | Saturday 17 January 2026 01:08:18 +0000 (0:00:01.636) 0:00:57.195 ****** 2026-01-17 01:10:15.486215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-17 01:10:15.486221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-17 01:10:15.486232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-17 01:10:15.486243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486280 | orchestrator | 2026-01-17 01:10:15.486283 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-01-17 01:10:15.486286 | orchestrator | Saturday 17 
January 2026 01:08:23 +0000 (0:00:05.382) 0:01:02.577 ****** 2026-01-17 01:10:15.486289 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-01-17 01:10:15.486293 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-01-17 01:10:15.486296 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-01-17 01:10:15.486299 | orchestrator | 2026-01-17 01:10:15.486302 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-01-17 01:10:15.486305 | orchestrator | Saturday 17 January 2026 01:08:25 +0000 (0:00:01.792) 0:01:04.370 ****** 2026-01-17 01:10:15.486310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-17 01:10:15.486317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-17 01:10:15.486321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-17 01:10:15.486324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486369 | orchestrator | 2026-01-17 01:10:15.486374 | orchestrator | TASK [cinder : Generating 'hostnqn' file 
for cinder_volume] ******************** 2026-01-17 01:10:15.486380 | orchestrator | Saturday 17 January 2026 01:08:41 +0000 (0:00:15.623) 0:01:19.993 ****** 2026-01-17 01:10:15.486384 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:10:15.486390 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:10:15.486395 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:10:15.486400 | orchestrator | 2026-01-17 01:10:15.486406 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-01-17 01:10:15.486414 | orchestrator | Saturday 17 January 2026 01:08:42 +0000 (0:00:01.705) 0:01:21.699 ****** 2026-01-17 01:10:15.486422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-17 01:10:15.486428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486445 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:10:15.486451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-17 01:10:15.486462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-17 01:10:15.486465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486481 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:10:15.486484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-17 01:10:15.486491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-17 
01:10:15.486494 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:10:15.486498 | orchestrator | 2026-01-17 01:10:15.486501 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-01-17 01:10:15.486504 | orchestrator | Saturday 17 January 2026 01:08:43 +0000 (0:00:00.823) 0:01:22.522 ****** 2026-01-17 01:10:15.486507 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:10:15.486510 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:10:15.486514 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:10:15.486517 | orchestrator | 2026-01-17 01:10:15.486520 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-01-17 01:10:15.486523 | orchestrator | Saturday 17 January 2026 01:08:44 +0000 (0:00:00.354) 0:01:22.877 ****** 2026-01-17 01:10:15.486526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-17 01:10:15.486529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-17 01:10:15.486535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-17 01:10:15.486541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-17 01:10:15.486607 | 
orchestrator | 2026-01-17 01:10:15.486613 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-17 01:10:15.486618 | orchestrator | Saturday 17 January 2026 01:08:47 +0000 (0:00:03.077) 0:01:25.955 ****** 2026-01-17 01:10:15.486622 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:10:15.486627 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:10:15.486632 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:10:15.486640 | orchestrator | 2026-01-17 01:10:15.486644 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-01-17 01:10:15.486649 | orchestrator | Saturday 17 January 2026 01:08:47 +0000 (0:00:00.514) 0:01:26.469 ****** 2026-01-17 01:10:15.486653 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:10:15.486658 | orchestrator | 2026-01-17 01:10:15.486663 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-01-17 01:10:15.486669 | orchestrator | Saturday 17 January 2026 01:08:49 +0000 (0:00:02.051) 0:01:28.521 ****** 2026-01-17 01:10:15.486674 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:10:15.486680 | orchestrator | 2026-01-17 01:10:15.486685 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-01-17 01:10:15.486691 | orchestrator | Saturday 17 January 2026 01:08:52 +0000 (0:00:02.289) 0:01:30.810 ****** 2026-01-17 01:10:15.486696 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:10:15.486701 | orchestrator | 2026-01-17 01:10:15.486707 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-01-17 01:10:15.486710 | orchestrator | Saturday 17 January 2026 01:09:10 +0000 (0:00:18.391) 0:01:49.201 ****** 2026-01-17 01:10:15.486714 | orchestrator | 2026-01-17 01:10:15.486717 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 
2026-01-17 01:10:15.486720 | orchestrator | Saturday 17 January 2026 01:09:10 +0000 (0:00:00.066) 0:01:49.268 ****** 2026-01-17 01:10:15.486723 | orchestrator | 2026-01-17 01:10:15.486726 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-01-17 01:10:15.486729 | orchestrator | Saturday 17 January 2026 01:09:10 +0000 (0:00:00.066) 0:01:49.334 ****** 2026-01-17 01:10:15.486732 | orchestrator | 2026-01-17 01:10:15.486736 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-01-17 01:10:15.486739 | orchestrator | Saturday 17 January 2026 01:09:10 +0000 (0:00:00.068) 0:01:49.403 ****** 2026-01-17 01:10:15.486742 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:10:15.486745 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:10:15.486748 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:10:15.486751 | orchestrator | 2026-01-17 01:10:15.486754 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-01-17 01:10:15.486757 | orchestrator | Saturday 17 January 2026 01:09:35 +0000 (0:00:24.367) 0:02:13.771 ****** 2026-01-17 01:10:15.486760 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:10:15.486763 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:10:15.486767 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:10:15.486770 | orchestrator | 2026-01-17 01:10:15.486773 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-01-17 01:10:15.486776 | orchestrator | Saturday 17 January 2026 01:09:40 +0000 (0:00:05.601) 0:02:19.372 ****** 2026-01-17 01:10:15.486779 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:10:15.486782 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:10:15.486785 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:10:15.486788 | orchestrator | 2026-01-17 01:10:15.486791 | orchestrator | RUNNING HANDLER 
[cinder : Restart cinder-backup container] ********************* 2026-01-17 01:10:15.486794 | orchestrator | Saturday 17 January 2026 01:10:02 +0000 (0:00:22.143) 0:02:41.515 ****** 2026-01-17 01:10:15.486797 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:10:15.486800 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:10:15.486803 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:10:15.486806 | orchestrator | 2026-01-17 01:10:15.486809 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-01-17 01:10:15.486816 | orchestrator | Saturday 17 January 2026 01:10:13 +0000 (0:00:10.654) 0:02:52.170 ****** 2026-01-17 01:10:15.486819 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:10:15.486823 | orchestrator | 2026-01-17 01:10:15.486826 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-17 01:10:15.486830 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-17 01:10:15.486850 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-17 01:10:15.486858 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-17 01:10:15.486864 | orchestrator | 2026-01-17 01:10:15.486869 | orchestrator | 2026-01-17 01:10:15.486874 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-17 01:10:15.486879 | orchestrator | Saturday 17 January 2026 01:10:13 +0000 (0:00:00.253) 0:02:52.424 ****** 2026-01-17 01:10:15.486885 | orchestrator | =============================================================================== 2026-01-17 01:10:15.486890 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 24.37s 2026-01-17 01:10:15.486895 | orchestrator | cinder : Restart cinder-volume container 
------------------------------- 22.14s 2026-01-17 01:10:15.486901 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.39s 2026-01-17 01:10:15.486906 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 15.62s 2026-01-17 01:10:15.486912 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.65s 2026-01-17 01:10:15.486917 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 6.91s 2026-01-17 01:10:15.486922 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.20s 2026-01-17 01:10:15.486927 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.60s 2026-01-17 01:10:15.486932 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.38s 2026-01-17 01:10:15.486940 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.33s 2026-01-17 01:10:15.486946 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.07s 2026-01-17 01:10:15.486951 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 4.06s 2026-01-17 01:10:15.486956 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.95s 2026-01-17 01:10:15.486961 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.44s 2026-01-17 01:10:15.486966 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.31s 2026-01-17 01:10:15.486971 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.25s 2026-01-17 01:10:15.486976 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.08s 2026-01-17 01:10:15.486981 | orchestrator | cinder : Ensuring config directories exist 
------------------------------ 2.85s 2026-01-17 01:10:15.486985 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.80s 2026-01-17 01:10:15.486995 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.29s 2026-01-17 01:10:15.487003 | orchestrator | 2026-01-17 01:10:15 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED 2026-01-17 01:10:15.487009 | orchestrator | 2026-01-17 01:10:15 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:10:18.531368 | orchestrator | 2026-01-17 01:10:18 | INFO  | Task ea7066e0-613e-4196-ad82-e4a4fdb9c336 is in state STARTED 2026-01-17 01:10:18.533202 | orchestrator | 2026-01-17 01:10:18 | INFO  | Task 8d62683c-dbd0-429d-9062-afd0fb05298d is in state STARTED 2026-01-17 01:10:18.534535 | orchestrator | 2026-01-17 01:10:18 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED 2026-01-17 01:10:18.534568 | orchestrator | 2026-01-17 01:10:18 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:10:21.579712 | orchestrator | 2026-01-17 01:10:21 | INFO  | Task ea7066e0-613e-4196-ad82-e4a4fdb9c336 is in state STARTED 2026-01-17 01:10:21.582623 | orchestrator | 2026-01-17 01:10:21 | INFO  | Task 8d62683c-dbd0-429d-9062-afd0fb05298d is in state STARTED 2026-01-17 01:10:21.585517 | orchestrator | 2026-01-17 01:10:21 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED 2026-01-17 01:10:21.585619 | orchestrator | 2026-01-17 01:10:21 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:10:24.636556 | orchestrator | 2026-01-17 01:10:24 | INFO  | Task ea7066e0-613e-4196-ad82-e4a4fdb9c336 is in state STARTED 2026-01-17 01:10:24.638848 | orchestrator | 2026-01-17 01:10:24 | INFO  | Task 8d62683c-dbd0-429d-9062-afd0fb05298d is in state STARTED 2026-01-17 01:10:24.642484 | orchestrator | 2026-01-17 01:10:24 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED 2026-01-17 
01:10:24.642531 | orchestrator | 2026-01-17 01:10:24 | INFO  | Wait 1 second(s) until the next check
[... identical status polls (Tasks ea7066e0, 8d62683c and 581d3af9 in state STARTED, repeated every ~3 s) omitted for 01:10:27 through 01:11:40 ...]
2026-01-17 01:11:43.971455 | orchestrator | 2026-01-17 01:11:43 | INFO  | Task ea7066e0-613e-4196-ad82-e4a4fdb9c336 is in state STARTED
2026-01-17 01:11:43.971535 | orchestrator | 2026-01-17 01:11:43 | INFO  | Task 8d62683c-dbd0-429d-9062-afd0fb05298d is in state SUCCESS
2026-01-17 01:11:43.972260 | orchestrator | 2026-01-17 01:11:43 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED
2026-01-17 01:11:43.972300 | orchestrator | 2026-01-17 01:11:43 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:11:47.035611 | orchestrator | 2026-01-17 01:11:47 | INFO  | Task ea7066e0-613e-4196-ad82-e4a4fdb9c336 is in state STARTED
2026-01-17 01:11:47.036831 | orchestrator | 2026-01-17 01:11:47 | INFO  | Task ded2d698-a8bd-4275-a3c9-8f83c90a5e3a is in state STARTED
2026-01-17 01:11:47.038482 | orchestrator | 2026-01-17 01:11:47 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED
2026-01-17 01:11:47.038536 | orchestrator | 2026-01-17 01:11:47 | INFO  | Wait 1 second(s) until the next check
[... identical status polls (Tasks ea7066e0, ded2d698 and 581d3af9 in state STARTED, repeated every ~3 s) omitted for 01:11:50 through 01:12:08 ...]
2026-01-17 01:12:11.503875 | orchestrator | 2026-01-17 01:12:11 | INFO  | Task ea7066e0-613e-4196-ad82-e4a4fdb9c336 is in state STARTED
2026-01-17 01:12:11.504407 | orchestrator
| 2026-01-17 01:12:11 | INFO  | Task ded2d698-a8bd-4275-a3c9-8f83c90a5e3a is in state STARTED
2026-01-17 01:12:11.505226 | orchestrator | 2026-01-17 01:12:11 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED
2026-01-17 01:12:11.505319 | orchestrator | 2026-01-17 01:12:11 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:12:14.547050 | orchestrator | 2026-01-17 01:12:14 | INFO  | Task ea7066e0-613e-4196-ad82-e4a4fdb9c336 is in state SUCCESS
2026-01-17 01:12:14.548004 | orchestrator |
2026-01-17 01:12:14.548050 | orchestrator |
2026-01-17 01:12:14.548068 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-17 01:12:14.548077 | orchestrator |
2026-01-17 01:12:14.548083 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-17 01:12:14.548090 | orchestrator | Saturday 17 January 2026 01:09:16 +0000 (0:00:00.290) 0:00:00.290 ******
2026-01-17 01:12:14.548097 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:12:14.548105 | orchestrator | ok: [testbed-node-1]
2026-01-17 01:12:14.548111 | orchestrator | ok: [testbed-node-2]
2026-01-17 01:12:14.548117 | orchestrator |
2026-01-17 01:12:14.548122 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-17 01:12:14.548129 | orchestrator | Saturday 17 January 2026 01:09:16 +0000 (0:00:00.457) 0:00:00.747 ******
2026-01-17 01:12:14.548160 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-01-17 01:12:14.548167 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-01-17 01:12:14.548171 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-01-17 01:12:14.548175 | orchestrator |
2026-01-17 01:12:14.548179 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-01-17 01:12:14.548183 | orchestrator |
2026-01-17 01:12:14.548202 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-01-17 01:12:14.548206 | orchestrator | Saturday 17 January 2026 01:09:17 +0000 (0:00:00.872) 0:00:01.620 ******
2026-01-17 01:12:14.548210 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:12:14.548214 | orchestrator | ok: [testbed-node-1]
2026-01-17 01:12:14.548218 | orchestrator | ok: [testbed-node-2]
2026-01-17 01:12:14.548222 | orchestrator |
2026-01-17 01:12:14.548226 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 01:12:14.548231 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 01:12:14.548236 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 01:12:14.548240 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 01:12:14.548244 | orchestrator |
2026-01-17 01:12:14.548334 | orchestrator |
2026-01-17 01:12:14.548607 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 01:12:14.548618 | orchestrator | Saturday 17 January 2026 01:11:42 +0000 (0:02:24.928) 0:02:26.548 ******
2026-01-17 01:12:14.548622 | orchestrator | ===============================================================================
2026-01-17 01:12:14.548625 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 144.93s
2026-01-17 01:12:14.548630 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.87s
2026-01-17 01:12:14.548634 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.46s
2026-01-17 01:12:14.548637 | orchestrator |
2026-01-17 01:12:14.548641 | orchestrator |
2026-01-17 01:12:14.548645 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-17
01:12:14.548649 | orchestrator |
2026-01-17 01:12:14.548653 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-17 01:12:14.548657 | orchestrator | Saturday 17 January 2026 01:09:53 +0000 (0:00:00.255) 0:00:00.255 ******
2026-01-17 01:12:14.548693 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:12:14.548698 | orchestrator | ok: [testbed-node-1]
2026-01-17 01:12:14.548702 | orchestrator | ok: [testbed-node-2]
2026-01-17 01:12:14.548705 | orchestrator |
2026-01-17 01:12:14.548709 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-17 01:12:14.548738 | orchestrator | Saturday 17 January 2026 01:09:53 +0000 (0:00:00.304) 0:00:00.559 ******
2026-01-17 01:12:14.548770 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-01-17 01:12:14.548775 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-01-17 01:12:14.548778 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-01-17 01:12:14.548782 | orchestrator |
2026-01-17 01:12:14.548786 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-01-17 01:12:14.548790 | orchestrator |
2026-01-17 01:12:14.548793 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-01-17 01:12:14.548797 | orchestrator | Saturday 17 January 2026 01:09:54 +0000 (0:00:00.436) 0:00:00.995 ******
2026-01-17 01:12:14.548801 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 01:12:14.548806 | orchestrator |
2026-01-17 01:12:14.548810 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-01-17 01:12:14.548814 | orchestrator | Saturday 17 January 2026 01:09:54 +0000 (0:00:00.525) 0:00:01.521 ******
2026-01-17 01:12:14.548821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-17 01:12:14.548841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', ...})
2026-01-17 01:12:14.548857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', ...})
2026-01-17 01:12:14.548866 | orchestrator |
2026-01-17 01:12:14.548872 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-01-17 01:12:14.548879 | orchestrator | Saturday 17 January 2026 01:09:55 +0000 (0:00:00.704) 0:00:02.225 ******
2026-01-17 01:12:14.548886 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2026-01-17 01:12:14.548892 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2026-01-17 01:12:14.548899 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-17 01:12:14.548914 | orchestrator |
2026-01-17 01:12:14.548921 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-01-17 01:12:14.548948 | orchestrator | Saturday 17 January 2026 01:09:56 +0000 (0:00:00.788) 0:00:03.013 ******
2026-01-17 01:12:14.548954 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 01:12:14.548997 | orchestrator |
2026-01-17 01:12:14.549002 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-01-17 01:12:14.549110 | orchestrator | Saturday 17 January 2026 01:09:57 +0000 (0:00:00.681) 0:00:03.694 ******
2026-01-17 01:12:14.549117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', ...})
2026-01-17 01:12:14.549122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', ...})
2026-01-17 01:12:14.549126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', ...})
2026-01-17 01:12:14.549130 | orchestrator |
2026-01-17 01:12:14.549189 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-01-17 01:12:14.549208 | orchestrator | Saturday 17 January 2026 01:09:58 +0000 (0:00:01.271) 0:00:04.966 ******
2026-01-17 01:12:14.549212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', ...})
2026-01-17 01:12:14.549222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', ...})
2026-01-17 01:12:14.549232 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:12:14.549237 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:12:14.549240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', ...})
2026-01-17 01:12:14.549244 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:12:14.549248 | orchestrator |
2026-01-17 01:12:14.549252 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2026-01-17 01:12:14.549255 | orchestrator | Saturday 17 January 2026 01:09:58 +0000 (0:00:00.371) 0:00:05.338 ******
2026-01-17 01:12:14.549259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', ...})
2026-01-17 01:12:14.549263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', ...})
2026-01-17 01:12:14.549267 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:12:14.549271 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:12:14.549287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', ...})
2026-01-17 01:12:14.549294 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:12:14.549300 | orchestrator |
2026-01-17 01:12:14.549310 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-01-17 01:12:14.549316 | orchestrator | Saturday 17 January 2026 01:09:59 +0000 (0:00:00.797) 0:00:06.136 ******
2026-01-17 01:12:14.549328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', ...})
2026-01-17 01:12:14.549342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', ...})
2026-01-17 01:12:14.549349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', ...})
2026-01-17 01:12:14.549355 | orchestrator |
2026-01-17 01:12:14.549361 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-01-17 01:12:14.549368 | orchestrator | Saturday 17 January 2026 01:10:00 +0000 (0:00:01.269) 0:00:07.406 ******
2026-01-17 01:12:14.549374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'},
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-17 01:12:14.549380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-17 01:12:14.549404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-17 01:12:14.549417 | orchestrator | 2026-01-17 01:12:14.549424 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-01-17 01:12:14.549430 | orchestrator | Saturday 17 January 2026 01:10:02 +0000 (0:00:01.322) 0:00:08.728 ****** 2026-01-17 01:12:14.549435 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:12:14.549443 | orchestrator | skipping: [testbed-node-1] 
2026-01-17 01:12:14.549454 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:12:14.549459 | orchestrator | 2026-01-17 01:12:14.549465 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-01-17 01:12:14.549471 | orchestrator | Saturday 17 January 2026 01:10:02 +0000 (0:00:00.545) 0:00:09.274 ****** 2026-01-17 01:12:14.549477 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-17 01:12:14.549483 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-17 01:12:14.549489 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-17 01:12:14.549494 | orchestrator | 2026-01-17 01:12:14.549500 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-01-17 01:12:14.549505 | orchestrator | Saturday 17 January 2026 01:10:04 +0000 (0:00:01.478) 0:00:10.753 ****** 2026-01-17 01:12:14.549511 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-17 01:12:14.549518 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-17 01:12:14.549523 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-17 01:12:14.549530 | orchestrator | 2026-01-17 01:12:14.549536 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-01-17 01:12:14.549542 | orchestrator | Saturday 17 January 2026 01:10:05 +0000 (0:00:01.461) 0:00:12.214 ****** 2026-01-17 01:12:14.549547 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-17 01:12:14.549553 | orchestrator | 2026-01-17 01:12:14.549559 | orchestrator | TASK [grafana : Find templated 
grafana dashboards] ***************************** 2026-01-17 01:12:14.549566 | orchestrator | Saturday 17 January 2026 01:10:06 +0000 (0:00:00.790) 0:00:13.004 ****** 2026-01-17 01:12:14.549572 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-01-17 01:12:14.549578 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-01-17 01:12:14.549584 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:12:14.549590 | orchestrator | ok: [testbed-node-1] 2026-01-17 01:12:14.549597 | orchestrator | ok: [testbed-node-2] 2026-01-17 01:12:14.549603 | orchestrator | 2026-01-17 01:12:14.549610 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-01-17 01:12:14.549616 | orchestrator | Saturday 17 January 2026 01:10:07 +0000 (0:00:00.743) 0:00:13.748 ****** 2026-01-17 01:12:14.549622 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:12:14.549626 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:12:14.549630 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:12:14.549633 | orchestrator | 2026-01-17 01:12:14.549637 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-01-17 01:12:14.549641 | orchestrator | Saturday 17 January 2026 01:10:07 +0000 (0:00:00.523) 0:00:14.271 ****** 2026-01-17 01:12:14.549646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1097169, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3811872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1097169, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3811872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1097169, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3811872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1097210, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.393237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1097210, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.393237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1097210, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.393237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1097184, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3828967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1097184, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3828967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1097184, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3828967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1097211, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3950498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1097211, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3950498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1097211, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3950498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1097194, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3868217, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1097194, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3868217, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1097194, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3868217, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1097204, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 
'mtime': 1768521739.0, 'ctime': 1768609111.3911388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1097204, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3911388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1097204, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3911388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1097167, 'dev': 119, 'nlink': 1, 'atime': 
1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3794215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1097167, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3794215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1097167, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3794215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1097178, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 
'mtime': 1768521739.0, 'ctime': 1768609111.3818376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1097178, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3818376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1097178, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3818376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1097185, 'dev': 119, 'nlink': 1, 
'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.383991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1097185, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.383991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1097185, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.383991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1097198, 
'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3891044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1097198, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3891044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1097198, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3891044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.549877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 
'inode': 1097208, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3918219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-17 01:12:14.549881 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => (loop over Grafana dashboard files under /operations/grafana/dashboards/; every file regular, owner root:root, mode 0644; identical per-node stat output condensed to one line per file)
2026-01-17 01:12:14.549881 | orchestrator |   ceph/rbd-details.json (12997 bytes)
2026-01-17 01:12:14.549897 | orchestrator |   ceph/ceph_overview.json (80386 bytes)
2026-01-17 01:12:14.549913 | orchestrator |   ceph/radosgw-detail.json (19695 bytes)
2026-01-17 01:12:14.549930 | orchestrator |   ceph/osds-overview.json (38432 bytes)
2026-01-17 01:12:14.549951 | orchestrator |   ceph/multi-cluster-overview.json (62676 bytes)
2026-01-17 01:12:14.549968 | orchestrator |   ceph/hosts-overview.json (27218 bytes)
2026-01-17 01:12:14.549988 | orchestrator |   ceph/pool-overview.json (49139 bytes)
2026-01-17 01:12:14.550006 | orchestrator |   ceph/host-details.json (44791 bytes)
2026-01-17 01:12:14.550073 | orchestrator |   ceph/radosgw-sync-overview.json (16156 bytes)
2026-01-17 01:12:14.550093 | orchestrator |   openstack/openstack.json (57270 bytes)
2026-01-17 01:12:14.550110 | orchestrator |   infrastructure/haproxy.json (410814 bytes)
2026-01-17 01:12:14.550130 | orchestrator |   infrastructure/database.json (30898 bytes)
2026-01-17 01:12:14.550171 | orchestrator |   infrastructure/node-rsrc-use.json (15725 bytes)
2026-01-17 01:12:14.550189 | orchestrator |   infrastructure/alertmanager-overview.json (9645 bytes)
2026-01-17 01:12:14.550206 | orchestrator |   infrastructure/opensearch.json (65458 bytes)
2026-01-17 01:12:14.550223 | orchestrator |   infrastructure/node_exporter_full.json (682774 bytes)
2026-01-17 01:12:14.550238 | orchestrator |   infrastructure/prometheus-remote-write.json (22317 bytes)
2026-01-17 01:12:14.550256 | orchestrator |   infrastructure/redfish.json (38087 bytes)
2026-01-17 01:12:14.550271 | orchestrator |   infrastructure/nodes.json (21109 bytes)
2026-01-17 01:12:14.550289 | orchestrator |   infrastructure/memcached.json (24243 bytes)
2026-01-17 01:12:14.550306 | orchestrator |   infrastructure/fluentd.json (82960 bytes)
2026-01-17 01:12:14.550326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1097251, 'dev': 119, 'nlink': 1,
'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.411822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1097250, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.411822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1097250, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.411822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
82960, 'inode': 1097242, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.403822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1097234, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.402822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1097234, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.402822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1097250, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.411822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1097252, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.413124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1097252, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.413124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1097234, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.402822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1097334, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.478227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1097334, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.478227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550500 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1097252, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.413124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1097329, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4751494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1097329, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4751494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1097334, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.478227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1097219, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3959177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1097219, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3959177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1097329, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4751494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1097224, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.396619, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1097224, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.396619, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1097219, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.3959177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1097257, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4208221, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
70691, 'inode': 1097257, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4208221, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1097224, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.396619, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1097264, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.426411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1097264, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.426411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1097257, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.4208221, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1097264, 'dev': 119, 'nlink': 1, 'atime': 1768521739.0, 'mtime': 1768521739.0, 'ctime': 1768609111.426411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-17 01:12:14.550657 | orchestrator | 2026-01-17 01:12:14.550663 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-01-17 01:12:14.550669 | 
orchestrator | Saturday 17 January 2026 01:10:43 +0000 (0:00:35.539) 0:00:49.811 ******
2026-01-17 01:12:14.550675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-17 01:12:14.550693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-17 01:12:14.550702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-17 01:12:14.550708 | orchestrator |
2026-01-17 01:12:14.550714 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-01-17 01:12:14.550720 | orchestrator | Saturday 17 January 2026 01:10:44 +0000 (0:00:00.953) 0:00:50.765 ******
2026-01-17 01:12:14.550726 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:12:14.550733 | orchestrator |
2026-01-17 01:12:14.550739 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-01-17 01:12:14.550744 | orchestrator | Saturday 17 January 2026 01:10:46 +0000 (0:00:02.200) 0:00:52.965 ******
2026-01-17 01:12:14.550749 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:12:14.550755 | orchestrator |
2026-01-17 01:12:14.550761 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-01-17 01:12:14.550766 | orchestrator | Saturday 17 January 2026 01:10:48 +0000 (0:00:00.064) 0:00:55.427 ******
2026-01-17 01:12:14.550772 | orchestrator |
2026-01-17 01:12:14.550777 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-01-17 01:12:14.550783 | orchestrator | Saturday 17 January 2026 01:10:48 +0000 (0:00:00.059) 0:00:55.491 ******
2026-01-17 01:12:14.550789 | orchestrator |
2026-01-17 01:12:14.550795 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-01-17 01:12:14.550802 | orchestrator | Saturday 17 January 2026 01:10:48 +0000 (0:00:00.059) 0:00:55.551 ******
2026-01-17 01:12:14.550808 | orchestrator |
2026-01-17 01:12:14.550814 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-01-17 01:12:14.550819 | orchestrator | Saturday 17 January 2026 01:10:49 +0000 (0:00:00.229) 0:00:55.780 ******
2026-01-17 01:12:14.550825 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:12:14.550831 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:12:14.550837 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:12:14.550843 | orchestrator |
2026-01-17 01:12:14.550848 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-01-17 01:12:14.550854 | orchestrator | Saturday 17 January 2026 01:10:51 +0000 (0:00:02.170) 0:00:57.951 ******
2026-01-17 01:12:14.550859 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:12:14.550865 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:12:14.550870 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-01-17 01:12:14.550876 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-01-17 01:12:14.550887 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2026-01-17 01:12:14.550893 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left).
2026-01-17 01:12:14.550900 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:12:14.550905 | orchestrator |
2026-01-17 01:12:14.550912 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-01-17 01:12:14.550918 | orchestrator | Saturday 17 January 2026 01:11:42 +0000 (0:00:51.178) 0:01:49.130 ******
2026-01-17 01:12:14.550924 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:12:14.550929 | orchestrator | changed: [testbed-node-1]
2026-01-17 01:12:14.550935 | orchestrator | changed: [testbed-node-2]
2026-01-17 01:12:14.550941 | orchestrator |
2026-01-17 01:12:14.550946 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-01-17 01:12:14.550953 | orchestrator | Saturday 17 January 2026 01:12:07 +0000 (0:00:24.606) 0:02:13.736 ******
2026-01-17 01:12:14.550959 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:12:14.550964 | orchestrator |
2026-01-17 01:12:14.550970 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-01-17 01:12:14.550976 | orchestrator | Saturday 17 January 2026 01:12:09 +0000 (0:00:02.544) 0:02:16.280 ******
2026-01-17 01:12:14.550981 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:12:14.550987 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:12:14.550993 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:12:14.550999 | orchestrator |
2026-01-17 01:12:14.551005 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-01-17 01:12:14.551016 | orchestrator | Saturday 17 January 2026 01:12:10 +0000 (0:00:00.544) 0:02:16.825 ******
2026-01-17 01:12:14.551023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-01-17 01:12:14.551031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-01-17 01:12:14.551037 | orchestrator |
2026-01-17 01:12:14.551043 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-01-17 01:12:14.551049 | orchestrator | Saturday 17 January 2026 01:12:12 +0000 (0:00:02.330) 0:02:19.155 ******
2026-01-17 01:12:14.551059 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:12:14.551065 | orchestrator |
2026-01-17 01:12:14.551071 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 01:12:14.551079 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-17 01:12:14.551087 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-17 01:12:14.551093 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-17 01:12:14.551100 | orchestrator |
2026-01-17 01:12:14.551106 | orchestrator |
2026-01-17 01:12:14.551111 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 01:12:14.551118 | orchestrator | Saturday 17 January 2026 01:12:12 +0000 (0:00:00.289) 0:02:19.445 ******
2026-01-17 01:12:14.551124 | orchestrator | ===============================================================================
2026-01-17 01:12:14.551129 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 51.18s
2026-01-17 01:12:14.551186 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 35.54s
2026-01-17 01:12:14.551193 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 24.61s
2026-01-17 01:12:14.551200 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.54s
2026-01-17 01:12:14.551207 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.46s
2026-01-17 01:12:14.551213 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.33s
2026-01-17 01:12:14.551220 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.20s
2026-01-17 01:12:14.551225 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.17s
2026-01-17 01:12:14.551232 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.48s
2026-01-17 01:12:14.551239 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.46s
2026-01-17 01:12:14.551245 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.32s
2026-01-17 01:12:14.551251 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.27s
2026-01-17 01:12:14.551256 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.27s
2026-01-17 01:12:14.551262 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.95s
2026-01-17 01:12:14.551269 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.80s
2026-01-17 01:12:14.551275 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.79s
2026-01-17 01:12:14.551281 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.79s
2026-01-17 01:12:14.551287 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.74s
2026-01-17 01:12:14.551293 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.70s
2026-01-17 01:12:14.551299 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.68s
2026-01-17 01:12:14.551304 | orchestrator | 2026-01-17 01:12:14 | INFO  | Task ded2d698-a8bd-4275-a3c9-8f83c90a5e3a is in state STARTED
2026-01-17 01:12:14.552431 | orchestrator | 2026-01-17 01:12:14 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED
2026-01-17 01:12:14.552475 | orchestrator | 2026-01-17 01:12:14 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:12:17.593975 | orchestrator | 2026-01-17 01:12:17 | INFO  | Task ded2d698-a8bd-4275-a3c9-8f83c90a5e3a is in state STARTED
2026-01-17 01:12:17.594257 | orchestrator | 2026-01-17 01:12:17 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED
2026-01-17 01:12:17.594270 | orchestrator | 2026-01-17 01:12:17 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:12:20.648312 | orchestrator | 2026-01-17 01:12:20 | INFO  | Task ded2d698-a8bd-4275-a3c9-8f83c90a5e3a is in state STARTED
2026-01-17 01:12:20.649030 | orchestrator | 2026-01-17 01:12:20 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED
2026-01-17 01:12:20.649284 | orchestrator | 2026-01-17 01:12:20 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:12:23.715987 | orchestrator | 2026-01-17 01:12:23 | INFO  | Task ded2d698-a8bd-4275-a3c9-8f83c90a5e3a is in state STARTED
2026-01-17 01:12:23.716686 | orchestrator | 2026-01-17 01:12:23 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED
2026-01-17 01:12:23.716722 | orchestrator | 2026-01-17 01:12:23 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:12:26.780296 | orchestrator | 2026-01-17 01:12:26 | INFO  | Task ded2d698-a8bd-4275-a3c9-8f83c90a5e3a is in state STARTED
2026-01-17 01:12:26.780346 | orchestrator |
2026-01-17 01:12:26 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED 2026-01-17 01:12:26.780393 | orchestrator | 2026-01-17 01:12:26 | INFO  | Wait 1 second(s) until the next check
[identical STARTED/wait status checks for tasks ded2d698-a8bd-4275-a3c9-8f83c90a5e3a and 581d3af9-7e47-4e2b-9cab-6df33ae22e4c repeated every ~3 seconds from 01:12:29 through 01:16:06]
2026-01-17 01:16:09.459088 | orchestrator | 2026-01-17
01:16:09 | INFO  | Task ded2d698-a8bd-4275-a3c9-8f83c90a5e3a is in state STARTED 2026-01-17 01:16:09.462122 | orchestrator | 2026-01-17 01:16:09 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state STARTED 2026-01-17 01:16:09.462154 | orchestrator | 2026-01-17 01:16:09 | INFO  | Wait 1 second(s) until the next check 2026-01-17 01:16:12.521483 | orchestrator | 2026-01-17 01:16:12 | INFO  | Task ded2d698-a8bd-4275-a3c9-8f83c90a5e3a is in state STARTED 2026-01-17 01:16:12.525948 | orchestrator | 2026-01-17 01:16:12 | INFO  | Task 581d3af9-7e47-4e2b-9cab-6df33ae22e4c is in state SUCCESS 2026-01-17 01:16:12.528032 | orchestrator | 2026-01-17 01:16:12.528065 | orchestrator | 2026-01-17 01:16:12.528071 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-17 01:16:12.528076 | orchestrator | 2026-01-17 01:16:12.528080 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-01-17 01:16:12.528086 | orchestrator | Saturday 17 January 2026 01:07:36 +0000 (0:00:00.283) 0:00:00.283 ****** 2026-01-17 01:16:12.528093 | orchestrator | changed: [testbed-manager] 2026-01-17 01:16:12.528102 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:16:12.528109 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:16:12.528115 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:16:12.528122 | orchestrator | changed: [testbed-node-3] 2026-01-17 01:16:12.528128 | orchestrator | changed: [testbed-node-4] 2026-01-17 01:16:12.528135 | orchestrator | changed: [testbed-node-5] 2026-01-17 01:16:12.528141 | orchestrator | 2026-01-17 01:16:12.528148 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-17 01:16:12.528153 | orchestrator | Saturday 17 January 2026 01:07:37 +0000 (0:00:00.876) 0:00:01.160 ****** 2026-01-17 01:16:12.528157 | orchestrator | changed: [testbed-manager] 2026-01-17 01:16:12.528161 | orchestrator | 
changed: [testbed-node-0] 2026-01-17 01:16:12.528164 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:16:12.528168 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:16:12.528172 | orchestrator | changed: [testbed-node-3] 2026-01-17 01:16:12.528188 | orchestrator | changed: [testbed-node-4] 2026-01-17 01:16:12.528193 | orchestrator | changed: [testbed-node-5] 2026-01-17 01:16:12.528196 | orchestrator | 2026-01-17 01:16:12.528200 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-17 01:16:12.528204 | orchestrator | Saturday 17 January 2026 01:07:37 +0000 (0:00:00.606) 0:00:01.766 ****** 2026-01-17 01:16:12.528208 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-01-17 01:16:12.528212 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-01-17 01:16:12.528216 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-01-17 01:16:12.528219 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-01-17 01:16:12.528223 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-01-17 01:16:12.528227 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-01-17 01:16:12.528231 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-01-17 01:16:12.528235 | orchestrator | 2026-01-17 01:16:12.528238 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-01-17 01:16:12.528242 | orchestrator | 2026-01-17 01:16:12.528246 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-01-17 01:16:12.528249 | orchestrator | Saturday 17 January 2026 01:07:38 +0000 (0:00:00.984) 0:00:02.751 ****** 2026-01-17 01:16:12.528253 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 01:16:12.528257 | orchestrator | 2026-01-17 01:16:12.528261 | orchestrator | TASK 
[nova : Creating Nova databases] ****************************************** 2026-01-17 01:16:12.528264 | orchestrator | Saturday 17 January 2026 01:07:39 +0000 (0:00:00.764) 0:00:03.515 ****** 2026-01-17 01:16:12.528269 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-01-17 01:16:12.528273 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-01-17 01:16:12.528305 | orchestrator | 2026-01-17 01:16:12.528310 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-01-17 01:16:12.528314 | orchestrator | Saturday 17 January 2026 01:07:43 +0000 (0:00:03.756) 0:00:07.272 ****** 2026-01-17 01:16:12.528318 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-17 01:16:12.528322 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-17 01:16:12.528325 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:16:12.528329 | orchestrator | 2026-01-17 01:16:12.528333 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-01-17 01:16:12.528337 | orchestrator | Saturday 17 January 2026 01:07:47 +0000 (0:00:03.972) 0:00:11.244 ****** 2026-01-17 01:16:12.528340 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:16:12.528344 | orchestrator | 2026-01-17 01:16:12.528348 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-01-17 01:16:12.528351 | orchestrator | Saturday 17 January 2026 01:07:48 +0000 (0:00:01.071) 0:00:12.316 ****** 2026-01-17 01:16:12.528355 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:16:12.528359 | orchestrator | 2026-01-17 01:16:12.528363 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-01-17 01:16:12.528366 | orchestrator | Saturday 17 January 2026 01:07:50 +0000 (0:00:01.901) 0:00:14.218 ****** 2026-01-17 01:16:12.528370 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:16:12.528374 | 
orchestrator | 2026-01-17 01:16:12.528377 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-01-17 01:16:12.528381 | orchestrator | Saturday 17 January 2026 01:07:54 +0000 (0:00:04.271) 0:00:18.490 ****** 2026-01-17 01:16:12.528385 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.528389 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.528392 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.528396 | orchestrator | 2026-01-17 01:16:12.528400 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-01-17 01:16:12.528403 | orchestrator | Saturday 17 January 2026 01:07:55 +0000 (0:00:00.841) 0:00:19.332 ****** 2026-01-17 01:16:12.528407 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:16:12.528414 | orchestrator | 2026-01-17 01:16:12.528418 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-01-17 01:16:12.528446 | orchestrator | Saturday 17 January 2026 01:08:27 +0000 (0:00:32.064) 0:00:51.396 ****** 2026-01-17 01:16:12.528451 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:16:12.528455 | orchestrator | 2026-01-17 01:16:12.528458 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-01-17 01:16:12.528468 | orchestrator | Saturday 17 January 2026 01:08:45 +0000 (0:00:17.713) 0:01:09.110 ****** 2026-01-17 01:16:12.528472 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:16:12.528476 | orchestrator | 2026-01-17 01:16:12.528480 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-01-17 01:16:12.528483 | orchestrator | Saturday 17 January 2026 01:08:58 +0000 (0:00:13.215) 0:01:22.325 ****** 2026-01-17 01:16:12.528494 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:16:12.528498 | orchestrator | 2026-01-17 01:16:12.528502 | orchestrator | TASK [nova : Update cell0 
mappings] ******************************************** 2026-01-17 01:16:12.528506 | orchestrator | Saturday 17 January 2026 01:08:59 +0000 (0:00:01.310) 0:01:23.635 ****** 2026-01-17 01:16:12.528510 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.528513 | orchestrator | 2026-01-17 01:16:12.528517 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-01-17 01:16:12.528521 | orchestrator | Saturday 17 January 2026 01:09:00 +0000 (0:00:00.778) 0:01:24.413 ****** 2026-01-17 01:16:12.528525 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 01:16:12.528528 | orchestrator | 2026-01-17 01:16:12.528532 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-01-17 01:16:12.528536 | orchestrator | Saturday 17 January 2026 01:09:01 +0000 (0:00:00.885) 0:01:25.299 ****** 2026-01-17 01:16:12.528540 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:16:12.528543 | orchestrator | 2026-01-17 01:16:12.528547 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-01-17 01:16:12.528551 | orchestrator | Saturday 17 January 2026 01:09:20 +0000 (0:00:19.437) 0:01:44.737 ****** 2026-01-17 01:16:12.528555 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.528558 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.528562 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.528566 | orchestrator | 2026-01-17 01:16:12.528594 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-01-17 01:16:12.528598 | orchestrator | 2026-01-17 01:16:12.528602 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-01-17 01:16:12.528606 | orchestrator | Saturday 17 January 2026 01:09:21 +0000 (0:00:00.398) 0:01:45.136 ****** 2026-01-17 
01:16:12.528609 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 01:16:12.528613 | orchestrator | 2026-01-17 01:16:12.528617 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-01-17 01:16:12.528621 | orchestrator | Saturday 17 January 2026 01:09:21 +0000 (0:00:00.699) 0:01:45.835 ****** 2026-01-17 01:16:12.528626 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.528632 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.528641 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:16:12.528647 | orchestrator | 2026-01-17 01:16:12.528654 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-01-17 01:16:12.528660 | orchestrator | Saturday 17 January 2026 01:09:24 +0000 (0:00:02.153) 0:01:47.989 ****** 2026-01-17 01:16:12.528668 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.528673 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.528679 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:16:12.528686 | orchestrator | 2026-01-17 01:16:12.528692 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-01-17 01:16:12.528698 | orchestrator | Saturday 17 January 2026 01:09:26 +0000 (0:00:02.069) 0:01:50.058 ****** 2026-01-17 01:16:12.528713 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.528720 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.528726 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.528732 | orchestrator | 2026-01-17 01:16:12.528739 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-01-17 01:16:12.528745 | orchestrator | Saturday 17 January 2026 01:09:26 +0000 (0:00:00.393) 0:01:50.452 ****** 2026-01-17 01:16:12.528752 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-17 01:16:12.528758 
| orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.528765 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-17 01:16:12.528771 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.528778 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-01-17 01:16:12.528784 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-01-17 01:16:12.528790 | orchestrator | 2026-01-17 01:16:12.528796 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-01-17 01:16:12.528802 | orchestrator | Saturday 17 January 2026 01:09:34 +0000 (0:00:08.351) 0:01:58.803 ****** 2026-01-17 01:16:12.528809 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.528816 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.528822 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.528828 | orchestrator | 2026-01-17 01:16:12.528836 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-01-17 01:16:12.528841 | orchestrator | Saturday 17 January 2026 01:09:35 +0000 (0:00:00.371) 0:01:59.175 ****** 2026-01-17 01:16:12.528848 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-17 01:16:12.528854 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.528861 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-17 01:16:12.528867 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.528873 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-17 01:16:12.528879 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.528886 | orchestrator | 2026-01-17 01:16:12.528892 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-01-17 01:16:12.528899 | orchestrator | Saturday 17 January 2026 01:09:36 +0000 (0:00:00.861) 0:02:00.037 ****** 2026-01-17 01:16:12.528905 | orchestrator | skipping: 
[testbed-node-1] 2026-01-17 01:16:12.528911 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.528918 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:16:12.528925 | orchestrator | 2026-01-17 01:16:12.528931 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-01-17 01:16:12.528937 | orchestrator | Saturday 17 January 2026 01:09:37 +0000 (0:00:00.939) 0:02:00.976 ****** 2026-01-17 01:16:12.528944 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.528950 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.528957 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:16:12.528963 | orchestrator | 2026-01-17 01:16:12.528970 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-01-17 01:16:12.528976 | orchestrator | Saturday 17 January 2026 01:09:37 +0000 (0:00:00.966) 0:02:01.943 ****** 2026-01-17 01:16:12.528983 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.528990 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.529002 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:16:12.529009 | orchestrator | 2026-01-17 01:16:12.529015 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-01-17 01:16:12.529022 | orchestrator | Saturday 17 January 2026 01:09:39 +0000 (0:00:02.007) 0:02:03.950 ****** 2026-01-17 01:16:12.529028 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.529035 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.529041 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:16:12.529048 | orchestrator | 2026-01-17 01:16:12.529054 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-01-17 01:16:12.529061 | orchestrator | Saturday 17 January 2026 01:10:01 +0000 (0:00:21.396) 0:02:25.346 ****** 2026-01-17 01:16:12.529071 | orchestrator | skipping: 
[testbed-node-1] 2026-01-17 01:16:12.529078 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.529085 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:16:12.529091 | orchestrator | 2026-01-17 01:16:12.529098 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-01-17 01:16:12.529104 | orchestrator | Saturday 17 January 2026 01:10:15 +0000 (0:00:13.681) 0:02:39.028 ****** 2026-01-17 01:16:12.529110 | orchestrator | ok: [testbed-node-0] 2026-01-17 01:16:12.529116 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.529124 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.529130 | orchestrator | 2026-01-17 01:16:12.529136 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-01-17 01:16:12.529142 | orchestrator | Saturday 17 January 2026 01:10:16 +0000 (0:00:00.944) 0:02:39.972 ****** 2026-01-17 01:16:12.529150 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.529157 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.529163 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:16:12.529170 | orchestrator | 2026-01-17 01:16:12.529177 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-01-17 01:16:12.529184 | orchestrator | Saturday 17 January 2026 01:10:28 +0000 (0:00:12.215) 0:02:52.187 ****** 2026-01-17 01:16:12.529191 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.529198 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.529205 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.529212 | orchestrator | 2026-01-17 01:16:12.529219 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-01-17 01:16:12.529226 | orchestrator | Saturday 17 January 2026 01:10:29 +0000 (0:00:01.122) 0:02:53.310 ****** 2026-01-17 01:16:12.529233 | orchestrator | skipping: [testbed-node-0] 
2026-01-17 01:16:12.529241 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.529247 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.529254 | orchestrator | 2026-01-17 01:16:12.529261 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-01-17 01:16:12.529268 | orchestrator | 2026-01-17 01:16:12.529275 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-01-17 01:16:12.529472 | orchestrator | Saturday 17 January 2026 01:10:29 +0000 (0:00:00.586) 0:02:53.897 ****** 2026-01-17 01:16:12.529491 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 01:16:12.529501 | orchestrator | 2026-01-17 01:16:12.529510 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-01-17 01:16:12.529519 | orchestrator | Saturday 17 January 2026 01:10:30 +0000 (0:00:00.636) 0:02:54.533 ****** 2026-01-17 01:16:12.529528 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-01-17 01:16:12.529611 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-01-17 01:16:12.529627 | orchestrator | 2026-01-17 01:16:12.529635 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-01-17 01:16:12.529644 | orchestrator | Saturday 17 January 2026 01:10:34 +0000 (0:00:03.680) 0:02:58.214 ****** 2026-01-17 01:16:12.529653 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-01-17 01:16:12.529663 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-01-17 01:16:12.529672 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-01-17 
01:16:12.529678 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-01-17 01:16:12.529686 | orchestrator | 2026-01-17 01:16:12.529693 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-01-17 01:16:12.529699 | orchestrator | Saturday 17 January 2026 01:10:40 +0000 (0:00:05.928) 0:03:04.142 ****** 2026-01-17 01:16:12.529713 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-17 01:16:12.529721 | orchestrator | 2026-01-17 01:16:12.529728 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-01-17 01:16:12.529737 | orchestrator | Saturday 17 January 2026 01:10:43 +0000 (0:00:03.013) 0:03:07.155 ****** 2026-01-17 01:16:12.529744 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-17 01:16:12.529752 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-01-17 01:16:12.529760 | orchestrator | 2026-01-17 01:16:12.529768 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-01-17 01:16:12.529777 | orchestrator | Saturday 17 January 2026 01:10:47 +0000 (0:00:03.930) 0:03:11.085 ****** 2026-01-17 01:16:12.529784 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-17 01:16:12.529793 | orchestrator | 2026-01-17 01:16:12.529801 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-01-17 01:16:12.529809 | orchestrator | Saturday 17 January 2026 01:10:50 +0000 (0:00:03.621) 0:03:14.706 ****** 2026-01-17 01:16:12.529820 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-01-17 01:16:12.529828 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-01-17 01:16:12.529836 | orchestrator | 2026-01-17 01:16:12.529844 | orchestrator | TASK [nova : Ensuring config directories exist] 
******************************** 2026-01-17 01:16:12.529861 | orchestrator | Saturday 17 January 2026 01:10:57 +0000 (0:00:07.081) 0:03:21.788 ****** 2026-01-17 01:16:12.529871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:12.529881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:12.529889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:12.529910 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.529918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.529925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.529931 | orchestrator | 2026-01-17 01:16:12.529938 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-01-17 01:16:12.529946 | 
orchestrator | Saturday 17 January 2026 01:10:59 +0000 (0:00:01.309) 0:03:23.097 ****** 2026-01-17 01:16:12.529952 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.529959 | orchestrator | 2026-01-17 01:16:12.529966 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-01-17 01:16:12.529973 | orchestrator | Saturday 17 January 2026 01:10:59 +0000 (0:00:00.163) 0:03:23.261 ****** 2026-01-17 01:16:12.529980 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.529986 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.529993 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.530000 | orchestrator | 2026-01-17 01:16:12.530007 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-01-17 01:16:12.530621 | orchestrator | Saturday 17 January 2026 01:10:59 +0000 (0:00:00.302) 0:03:23.563 ****** 2026-01-17 01:16:12.530641 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-17 01:16:12.530654 | orchestrator | 2026-01-17 01:16:12.530660 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-01-17 01:16:12.530666 | orchestrator | Saturday 17 January 2026 01:11:00 +0000 (0:00:01.014) 0:03:24.578 ****** 2026-01-17 01:16:12.530673 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.530678 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.530684 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.530702 | orchestrator | 2026-01-17 01:16:12.530709 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-01-17 01:16:12.530715 | orchestrator | Saturday 17 January 2026 01:11:00 +0000 (0:00:00.315) 0:03:24.894 ****** 2026-01-17 01:16:12.530721 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 01:16:12.530727 | orchestrator | 
2026-01-17 01:16:12.530733 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-01-17 01:16:12.530739 | orchestrator | Saturday 17 January 2026 01:11:01 +0000 (0:00:00.575) 0:03:25.469 ****** 2026-01-17 01:16:12.530750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:12.530765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:12.530773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:12.530792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.530800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.530812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.530826 | orchestrator | 2026-01-17 01:16:12.530833 | orchestrator | TASK [service-cert-copy : 
nova | Copying over backend internal TLS certificate] *** 2026-01-17 01:16:12.530839 | orchestrator | Saturday 17 January 2026 01:11:04 +0000 (0:00:03.075) 0:03:28.544 ****** 2026-01-17 01:16:12.530845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-17 01:16:12.530856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.530863 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.530869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-17 01:16:12.530879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}})  2026-01-17 01:16:12.530886 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.530896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-17 01:16:12.530903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.530912 
| orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.530918 | orchestrator | 2026-01-17 01:16:12.530925 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-17 01:16:12.530931 | orchestrator | Saturday 17 January 2026 01:11:05 +0000 (0:00:00.611) 0:03:29.156 ****** 2026-01-17 01:16:12.530938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-17 01:16:12.530945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.530952 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.530965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-17 01:16:12.530972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.530982 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.530990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-17 01:16:12.530996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.531003 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.531009 | orchestrator | 2026-01-17 01:16:12.531015 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-01-17 01:16:12.531022 | orchestrator | Saturday 17 January 2026 01:11:06 +0000 (0:00:00.854) 0:03:30.010 ****** 2026-01-17 01:16:12.531037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:12.531044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:12.531055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:12.531063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.531076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.531083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.531093 | orchestrator | 2026-01-17 01:16:12.531100 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-01-17 01:16:12.531107 | orchestrator | Saturday 17 January 2026 01:11:08 +0000 (0:00:02.550) 0:03:32.561 ****** 2026-01-17 01:16:12.531113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:12.531120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:12.531133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:12.531144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.531151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.531158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.531164 | orchestrator | 2026-01-17 01:16:12.531171 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-01-17 01:16:12.531177 | orchestrator | Saturday 17 January 2026 01:11:14 +0000 (0:00:05.915) 0:03:38.476 ****** 2026-01-17 01:16:12.531184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-17 01:16:12.531197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.531204 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.531214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-17 01:16:12.531221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.531228 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.531235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-17 01:16:12.531241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.531248 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.531254 | orchestrator | 2026-01-17 01:16:12.531261 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-01-17 01:16:12.531267 | orchestrator | Saturday 17 January 2026 01:11:15 +0000 (0:00:00.646) 0:03:39.123 ****** 2026-01-17 01:16:12.531278 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:16:12.531324 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:16:12.531332 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:16:12.531338 | orchestrator | 2026-01-17 01:16:12.531349 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-01-17 01:16:12.531356 | orchestrator | Saturday 17 January 2026 01:11:16 +0000 (0:00:01.512) 0:03:40.636 ****** 2026-01-17 01:16:12.531363 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.531370 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.531376 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.531383 | orchestrator | 2026-01-17 01:16:12.531390 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-01-17 01:16:12.531396 | orchestrator | Saturday 17 January 2026 01:11:17 +0000 (0:00:00.357) 0:03:40.994 ****** 2026-01-17 01:16:12.531403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:12.531411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:12.531425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:12.531438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 
01:16:12.531445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.531452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.531458 | orchestrator | 2026-01-17 01:16:12.531464 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-17 01:16:12.531470 | orchestrator | Saturday 17 January 2026 01:11:19 +0000 (0:00:02.314) 0:03:43.308 ****** 2026-01-17 01:16:12.531476 | orchestrator | 2026-01-17 01:16:12.531483 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-17 01:16:12.531489 | orchestrator | Saturday 17 January 2026 01:11:19 +0000 (0:00:00.130) 0:03:43.439 ****** 2026-01-17 01:16:12.531495 | orchestrator | 2026-01-17 01:16:12.531501 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-17 01:16:12.531508 | orchestrator | 
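The nova-scheduler items above carry a container healthcheck of the form `healthcheck_port nova-scheduler 5672` (interval 30s, 3 retries). As an annotation for readers of this log: a minimal Python sketch of a port-style check is below. This is only an illustrative stand-in, not Kolla's actual `healthcheck_port` helper script (which additionally verifies which process owns the connection).

```python
import socket

def healthcheck_port(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    Illustrative sketch of a port-based container healthcheck; the real
    helper shipped in the Kolla images does more (process ownership checks).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

The `interval`/`retries`/`timeout` values in the log items map onto the corresponding Docker healthcheck options for each container.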
Saturday 17 January 2026 01:11:19 +0000 (0:00:00.127) 0:03:43.567 ****** 2026-01-17 01:16:12.531514 | orchestrator | 2026-01-17 01:16:12.531520 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-01-17 01:16:12.531526 | orchestrator | Saturday 17 January 2026 01:11:19 +0000 (0:00:00.132) 0:03:43.699 ****** 2026-01-17 01:16:12.531532 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:16:12.531539 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:16:12.531545 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:16:12.531551 | orchestrator | 2026-01-17 01:16:12.531557 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-01-17 01:16:12.531564 | orchestrator | Saturday 17 January 2026 01:11:35 +0000 (0:00:15.460) 0:03:59.160 ****** 2026-01-17 01:16:12.531570 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:16:12.531577 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:16:12.531584 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:16:12.531594 | orchestrator | 2026-01-17 01:16:12.531647 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-01-17 01:16:12.531668 | orchestrator | 2026-01-17 01:16:12.531676 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-17 01:16:12.531682 | orchestrator | Saturday 17 January 2026 01:11:40 +0000 (0:00:05.330) 0:04:04.490 ****** 2026-01-17 01:16:12.531689 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 01:16:12.531695 | orchestrator | 2026-01-17 01:16:12.531702 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-17 01:16:12.531709 | orchestrator | Saturday 17 January 2026 01:11:41 +0000 (0:00:01.468) 0:04:05.958 
****** 2026-01-17 01:16:12.531715 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:16:12.531721 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:16:12.531728 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:16:12.531735 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.531742 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.531749 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.531755 | orchestrator | 2026-01-17 01:16:12.531762 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-01-17 01:16:12.531768 | orchestrator | Saturday 17 January 2026 01:11:42 +0000 (0:00:00.811) 0:04:06.770 ****** 2026-01-17 01:16:12.531775 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.531781 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.531791 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.531798 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-17 01:16:12.531804 | orchestrator | 2026-01-17 01:16:12.531811 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-17 01:16:12.531823 | orchestrator | Saturday 17 January 2026 01:11:44 +0000 (0:00:01.411) 0:04:08.181 ****** 2026-01-17 01:16:12.531830 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-01-17 01:16:12.531836 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-01-17 01:16:12.531842 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-01-17 01:16:12.531848 | orchestrator | 2026-01-17 01:16:12.531854 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-17 01:16:12.531861 | orchestrator | Saturday 17 January 2026 01:11:45 +0000 (0:00:00.955) 0:04:09.136 ****** 2026-01-17 01:16:12.531867 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-01-17 
01:16:12.531874 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-01-17 01:16:12.531880 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-01-17 01:16:12.531886 | orchestrator | 2026-01-17 01:16:12.531892 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-17 01:16:12.531898 | orchestrator | Saturday 17 January 2026 01:11:47 +0000 (0:00:01.842) 0:04:10.979 ****** 2026-01-17 01:16:12.531904 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-01-17 01:16:12.531910 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:16:12.531916 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-01-17 01:16:12.531923 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:16:12.531929 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-01-17 01:16:12.531935 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:16:12.531942 | orchestrator | 2026-01-17 01:16:12.531949 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-01-17 01:16:12.531955 | orchestrator | Saturday 17 January 2026 01:11:47 +0000 (0:00:00.608) 0:04:11.588 ****** 2026-01-17 01:16:12.531961 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-17 01:16:12.531968 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-17 01:16:12.531974 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.531986 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-17 01:16:12.531993 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-17 01:16:12.531999 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-17 01:16:12.532006 | orchestrator | changed: [testbed-node-3] => 
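The `module-load` tasks above first `modprobe` br_netfilter on the compute nodes and then persist it via a modules-load.d(5) drop-in. A minimal Python sketch of the persistence step follows; note the real task is an Ansible template, and the drop-in file name used here is an assumption.

```python
from pathlib import Path

def persist_module(name: str, conf_dir: str = "/etc/modules-load.d") -> Path:
    """Write a modules-load.d(5) drop-in so `name` is loaded at boot.

    Sketch of the 'Persist modules via modules-load.d' step for
    br_netfilter; the <name>.conf file name is an assumed convention.
    """
    path = Path(conf_dir) / f"{name}.conf"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(f"{name}\n")
    return path
```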
(item=net.bridge.bridge-nf-call-iptables) 2026-01-17 01:16:12.532012 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.532018 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-17 01:16:12.532025 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-17 01:16:12.532031 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.532038 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-17 01:16:12.532044 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-17 01:16:12.532051 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-17 01:16:12.532057 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-17 01:16:12.532063 | orchestrator | 2026-01-17 01:16:12.532070 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-01-17 01:16:12.532076 | orchestrator | Saturday 17 January 2026 01:11:48 +0000 (0:00:01.299) 0:04:12.888 ****** 2026-01-17 01:16:12.532082 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.532089 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.532095 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.532102 | orchestrator | changed: [testbed-node-3] 2026-01-17 01:16:12.532108 | orchestrator | changed: [testbed-node-4] 2026-01-17 01:16:12.532114 | orchestrator | changed: [testbed-node-5] 2026-01-17 01:16:12.532121 | orchestrator | 2026-01-17 01:16:12.532127 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-01-17 01:16:12.532133 | orchestrator | Saturday 17 January 2026 01:11:50 +0000 (0:00:01.256) 0:04:14.145 ****** 2026-01-17 01:16:12.532139 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.532145 | orchestrator | 
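The 'Enable bridge-nf-call sysctl variables' task above sets two keys on the compute nodes (testbed-node-3..5) while skipping the controllers. A hedged sketch of the effect, writing the procfs files the way sysctl(8) resolves dotted keys (dots become path separators; this requires br_netfilter to be loaded first):

```python
from pathlib import Path

# The two keys toggled in the log; illustrative only, not the Ansible task.
BRIDGE_NF_KEYS = [
    "net.bridge.bridge-nf-call-iptables",
    "net.bridge.bridge-nf-call-ip6tables",
]

def sysctl_path(key: str, root: str = "/proc/sys") -> Path:
    """Map a dotted sysctl key to its procfs path."""
    return Path(root) / key.replace(".", "/")

def enable_bridge_nf(root: str = "/proc/sys") -> None:
    """Set both bridge-nf-call keys to 1 (mkdir is for sandboxed testing;
    under the real /proc/sys the directories already exist)."""
    for key in BRIDGE_NF_KEYS:
        p = sysctl_path(key, root)
        p.parent.mkdir(parents=True, exist_ok=True)
        p.write_text("1\n")
```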
skipping: [testbed-node-1] 2026-01-17 01:16:12.532152 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.532158 | orchestrator | changed: [testbed-node-4] 2026-01-17 01:16:12.532165 | orchestrator | changed: [testbed-node-5] 2026-01-17 01:16:12.532171 | orchestrator | changed: [testbed-node-3] 2026-01-17 01:16:12.532177 | orchestrator | 2026-01-17 01:16:12.532184 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-01-17 01:16:12.532190 | orchestrator | Saturday 17 January 2026 01:11:52 +0000 (0:00:02.026) 0:04:16.171 ****** 2026-01-17 01:16:12.532203 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532215 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532227 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532234 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532240 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532266 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532276 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532305 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532329 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532350 | orchestrator | 2026-01-17 01:16:12.532356 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-17 01:16:12.532363 | orchestrator | Saturday 17 January 2026 01:11:54 +0000 (0:00:02.153) 0:04:18.324 ****** 2026-01-17 01:16:12.532369 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 01:16:12.532377 | orchestrator | 2026-01-17 01:16:12.532383 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] 
*********** 2026-01-17 01:16:12.532389 | orchestrator | Saturday 17 January 2026 01:11:55 +0000 (0:00:01.386) 0:04:19.710 ****** 2026-01-17 01:16:12.532396 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532403 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532416 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532439 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532445 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532457 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532487 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532501 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532508 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.532520 | orchestrator | 2026-01-17 01:16:12.532526 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-01-17 01:16:12.532533 | orchestrator | Saturday 17 January 2026 01:11:59 +0000 (0:00:03.939) 0:04:23.650 ****** 2026-01-17 01:16:12.532704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-17 01:16:12.532718 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-17 01:16:12.532726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.532733 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:16:12.532740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-17 01:16:12.532746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-17 01:16:12.532777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.532785 | orchestrator | skipping: [testbed-node-5] 2026-01-17 
01:16:12.532791 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-17 01:16:12.532798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-17 01:16:12.532804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.532810 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:16:12.532817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-17 01:16:12.532828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.532834 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.532862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-17 01:16:12.532870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.532877 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.532884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-17 01:16:12.532891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.532897 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.532904 | orchestrator | 2026-01-17 01:16:12.532911 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-17 01:16:12.532917 | orchestrator | Saturday 17 January 2026 01:12:01 +0000 (0:00:01.812) 0:04:25.462 ****** 2026-01-17 01:16:12.532924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-17 01:16:12.532935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-17 01:16:12.532960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.532968 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:16:12.532975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}})  2026-01-17 01:16:12.532982 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-17 01:16:12.532990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.533001 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:16:12.533007 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-17 01:16:12.533017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-17 01:16:12.533041 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.533049 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:16:12.533056 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-17 01:16:12.533063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.533070 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.533077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-17 01:16:12.533090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.533096 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.533103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-17 01:16:12.533129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.533136 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.533143 | orchestrator | 2026-01-17 01:16:12.533149 | 
orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-01-17 01:16:12.533155 | orchestrator | Saturday 17 January 2026 01:12:03 +0000 (0:00:02.271) 0:04:27.734 ******
2026-01-17 01:16:12.533162 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:16:12.533168 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:16:12.533174 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:16:12.533180 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-17 01:16:12.533186 | orchestrator |
2026-01-17 01:16:12.533192 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-01-17 01:16:12.533199 | orchestrator | Saturday 17 January 2026 01:12:04 +0000 (0:00:01.102) 0:04:28.837 ******
2026-01-17 01:16:12.533206 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-17 01:16:12.533212 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-17 01:16:12.533219 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-17 01:16:12.533225 | orchestrator |
2026-01-17 01:16:12.533231 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-01-17 01:16:12.533238 | orchestrator | Saturday 17 January 2026 01:12:05 +0000 (0:00:00.997) 0:04:29.834 ******
2026-01-17 01:16:12.533245 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-17 01:16:12.533251 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-17 01:16:12.533258 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-17 01:16:12.533264 | orchestrator |
2026-01-17 01:16:12.533271 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-01-17 01:16:12.533278 | orchestrator | Saturday 17 January 2026 01:12:06 +0000 (0:00:00.943) 0:04:30.777 ******
2026-01-17 01:16:12.533330 | orchestrator | ok: [testbed-node-3]
2026-01-17 01:16:12.533337 | orchestrator | ok: [testbed-node-4]
2026-01-17 01:16:12.533344 | orchestrator | ok: [testbed-node-5]
2026-01-17 01:16:12.533350 | orchestrator |
2026-01-17 01:16:12.533357 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-01-17 01:16:12.533363 | orchestrator | Saturday 17 January 2026 01:12:07 +0000 (0:00:00.525) 0:04:31.303 ******
2026-01-17 01:16:12.533370 | orchestrator | ok: [testbed-node-3]
2026-01-17 01:16:12.533377 | orchestrator | ok: [testbed-node-4]
2026-01-17 01:16:12.533384 | orchestrator | ok: [testbed-node-5]
2026-01-17 01:16:12.533390 | orchestrator |
2026-01-17 01:16:12.533396 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-01-17 01:16:12.533403 | orchestrator | Saturday 17 January 2026 01:12:08 +0000 (0:00:00.848) 0:04:32.152 ******
2026-01-17 01:16:12.533409 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-01-17 01:16:12.533415 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-01-17 01:16:12.533422 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-01-17 01:16:12.533428 | orchestrator |
2026-01-17 01:16:12.533434 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-01-17 01:16:12.533440 | orchestrator | Saturday 17 January 2026 01:12:09 +0000 (0:00:01.497) 0:04:33.649 ******
2026-01-17 01:16:12.533447 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-01-17 01:16:12.533454 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-01-17 01:16:12.533460 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-01-17 01:16:12.533465 | orchestrator |
2026-01-17 01:16:12.533472 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-01-17 01:16:12.533478 | orchestrator | Saturday 17 January 2026 01:12:10 +0000 (0:00:01.186) 0:04:34.836 ******
2026-01-17 01:16:12.533484 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-01-17 01:16:12.533490 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-01-17 01:16:12.533496 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-01-17 01:16:12.533502 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-01-17 01:16:12.533508 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-01-17 01:16:12.533515 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-01-17 01:16:12.533521 | orchestrator |
2026-01-17 01:16:12.533527 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-01-17 01:16:12.533533 | orchestrator | Saturday 17 January 2026 01:12:14 +0000 (0:00:04.029) 0:04:38.865 ******
2026-01-17 01:16:12.533539 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:16:12.533546 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:16:12.533552 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:16:12.533558 | orchestrator |
2026-01-17 01:16:12.533565 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-01-17 01:16:12.533572 | orchestrator | Saturday 17 January 2026 01:12:15 +0000 (0:00:00.551) 0:04:39.417 ******
2026-01-17 01:16:12.533578 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:16:12.533585 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:16:12.533591 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:16:12.533598 | orchestrator |
2026-01-17 01:16:12.533605 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-01-17 01:16:12.533611 | orchestrator | Saturday 17 January 2026 01:12:15 +0000 (0:00:00.382) 0:04:39.799 ******
2026-01-17 01:16:12.533618 | orchestrator | changed: [testbed-node-3]
2026-01-17 01:16:12.533625 | orchestrator | changed: [testbed-node-4]
2026-01-17 01:16:12.533631 | orchestrator | changed: [testbed-node-5]
2026-01-17 01:16:12.533638 | orchestrator |
2026-01-17 01:16:12.533649 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-01-17 01:16:12.533656 | orchestrator | Saturday 17 January 2026 01:12:17 +0000 (0:00:01.294) 0:04:41.094 ******
2026-01-17 01:16:12.533695 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-01-17 01:16:12.533709 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-01-17 01:16:12.533716 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-01-17 01:16:12.533724 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-01-17 01:16:12.533731 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-01-17 01:16:12.533738 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-01-17 01:16:12.533745 | orchestrator |
2026-01-17 01:16:12.533752 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-01-17 01:16:12.533759 | orchestrator | Saturday 17 January 2026 01:12:20 +0000 (0:00:03.661) 0:04:44.755 ******
2026-01-17 01:16:12.533766 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-17 01:16:12.533772 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-17 01:16:12.533778 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-17 01:16:12.533784 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-17 01:16:12.533790 | orchestrator | changed: [testbed-node-3]
2026-01-17 01:16:12.533796 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-17 01:16:12.533802 | orchestrator | changed: [testbed-node-4]
2026-01-17 01:16:12.533808 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-17 01:16:12.533815 | orchestrator | changed: [testbed-node-5]
2026-01-17 01:16:12.533821 | orchestrator |
2026-01-17 01:16:12.533828 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-01-17 01:16:12.533835 | orchestrator | Saturday 17 January 2026 01:12:24 +0000 (0:00:03.789) 0:04:48.545 ******
2026-01-17 01:16:12.533842 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:16:12.533848 | orchestrator |
2026-01-17 01:16:12.533855 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-01-17 01:16:12.533860 | orchestrator | Saturday 17 January 2026 01:12:24 +0000 (0:00:00.132) 0:04:48.678 ******
2026-01-17 01:16:12.533866 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:16:12.533873 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:16:12.533879 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:16:12.533885 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:16:12.533890 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:16:12.533896 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:16:12.533902 | orchestrator |
2026-01-17 01:16:12.533908 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-01-17 01:16:12.533914 | orchestrator | Saturday 17 January 2026 01:12:25 +0000 (0:00:00.644) 0:04:49.322 ******
2026-01-17 01:16:12.533921 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-17 01:16:12.533927 | orchestrator |
2026-01-17 01:16:12.533934 | orchestrator | TASK [nova-cell : Set vendordata file path]
************************************ 2026-01-17 01:16:12.533940 | orchestrator | Saturday 17 January 2026 01:12:26 +0000 (0:00:00.748) 0:04:50.071 ****** 2026-01-17 01:16:12.533946 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:16:12.533952 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:16:12.533958 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:16:12.533964 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.533970 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.533977 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.533983 | orchestrator | 2026-01-17 01:16:12.533990 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-01-17 01:16:12.533996 | orchestrator | Saturday 17 January 2026 01:12:26 +0000 (0:00:00.831) 0:04:50.903 ****** 2026-01-17 01:16:12.534009 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-17 01:16:12.534050 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-17 01:16:12.534057 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-17 01:16:12.534064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-17 01:16:12.534072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-17 01:16:12.534078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-17 01:16:12.534089 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-17 01:16:12.534102 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-17 01:16:12.534109 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-17 01:16:12.534116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.534122 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.534128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.534138 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-17 
01:16:12.534152 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.534163 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.534170 | orchestrator | 2026-01-17 01:16:12.534177 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-01-17 01:16:12.534183 | orchestrator | Saturday 17 January 2026 01:12:30 +0000 (0:00:03.644) 0:04:54.548 ****** 2026-01-17 01:16:12.534189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-17 01:16:12.534196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-17 01:16:12.534207 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-17 01:16:12.534213 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-17 01:16:12.534226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-17 01:16:12.534233 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-17 01:16:12.534240 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.534247 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.534258 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.534268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-17 01:16:12.534280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-17 01:16:12.534302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-17 01:16:12.534308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.534315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.534326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.534332 | orchestrator | 2026-01-17 01:16:12.534339 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-01-17 01:16:12.534346 | orchestrator | Saturday 17 January 2026 01:12:37 +0000 (0:00:06.592) 0:05:01.140 ****** 2026-01-17 01:16:12.534352 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:16:12.534359 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:16:12.534366 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:16:12.534372 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.534379 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.534386 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.534392 | orchestrator | 2026-01-17 01:16:12.534398 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-01-17 01:16:12.534404 | orchestrator | Saturday 17 January 2026 01:12:38 +0000 (0:00:01.432) 0:05:02.572 ****** 2026-01-17 01:16:12.534411 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-17 01:16:12.534417 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-17 01:16:12.534423 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-17 01:16:12.534429 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-17 01:16:12.534435 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-17 01:16:12.534445 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-17 01:16:12.534451 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.534458 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-17 01:16:12.534469 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-17 01:16:12.534475 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.534482 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-17 01:16:12.534488 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.534494 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-17 01:16:12.534501 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-17 01:16:12.534507 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-17 01:16:12.534514 | orchestrator | 2026-01-17 01:16:12.534519 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-01-17 01:16:12.534526 | orchestrator | Saturday 17 January 2026 01:12:42 +0000 (0:00:03.950) 0:05:06.523 ****** 2026-01-17 01:16:12.534532 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:16:12.534539 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:16:12.534545 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:16:12.534556 | orchestrator | skipping: 
[testbed-node-0] 2026-01-17 01:16:12.534562 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.534568 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.534575 | orchestrator | 2026-01-17 01:16:12.534581 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-01-17 01:16:12.534587 | orchestrator | Saturday 17 January 2026 01:12:43 +0000 (0:00:00.650) 0:05:07.173 ****** 2026-01-17 01:16:12.534593 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-17 01:16:12.534599 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-17 01:16:12.534605 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-17 01:16:12.534611 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-17 01:16:12.534617 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-17 01:16:12.534624 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-17 01:16:12.534630 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-17 01:16:12.534636 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-17 01:16:12.534642 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-17 01:16:12.534648 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-17 
01:16:12.534655 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.534661 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-17 01:16:12.534667 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.534673 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-17 01:16:12.534679 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.534685 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-17 01:16:12.534692 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-17 01:16:12.534698 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-17 01:16:12.534704 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-17 01:16:12.534710 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-17 01:16:12.534717 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-17 01:16:12.534723 | orchestrator | 2026-01-17 01:16:12.534730 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-01-17 01:16:12.534736 | orchestrator | Saturday 17 January 2026 01:12:48 +0000 (0:00:05.519) 0:05:12.692 ****** 2026-01-17 01:16:12.534742 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-17 01:16:12.534748 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-17 01:16:12.534754 | 
orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-17 01:16:12.534763 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-17 01:16:12.534774 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-17 01:16:12.534780 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-17 01:16:12.534791 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-17 01:16:12.534797 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-17 01:16:12.534804 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-17 01:16:12.534810 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-17 01:16:12.534817 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-17 01:16:12.534823 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-17 01:16:12.534829 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-17 01:16:12.534835 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.534841 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-17 01:16:12.534847 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.534853 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-17 01:16:12.534859 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.534866 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-17 01:16:12.534872 | orchestrator | changed: 
[testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-17 01:16:12.534878 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-17 01:16:12.534884 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-17 01:16:12.534890 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-17 01:16:12.534897 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-17 01:16:12.534903 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-17 01:16:12.534909 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-17 01:16:12.534915 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-17 01:16:12.534921 | orchestrator | 2026-01-17 01:16:12.534927 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-01-17 01:16:12.534932 | orchestrator | Saturday 17 January 2026 01:12:56 +0000 (0:00:07.439) 0:05:20.132 ****** 2026-01-17 01:16:12.534938 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:16:12.534944 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:16:12.534950 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:16:12.534956 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.534962 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.534968 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.534974 | orchestrator | 2026-01-17 01:16:12.534980 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-01-17 01:16:12.534986 | orchestrator | Saturday 17 January 2026 01:12:57 +0000 (0:00:00.828) 0:05:20.961 ****** 2026-01-17 01:16:12.534992 | orchestrator | skipping: 
[testbed-node-3] 2026-01-17 01:16:12.534997 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:16:12.535003 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:16:12.535009 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.535015 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.535020 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.535027 | orchestrator | 2026-01-17 01:16:12.535033 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-01-17 01:16:12.535045 | orchestrator | Saturday 17 January 2026 01:12:57 +0000 (0:00:00.612) 0:05:21.574 ****** 2026-01-17 01:16:12.535052 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.535059 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.535065 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.535072 | orchestrator | changed: [testbed-node-3] 2026-01-17 01:16:12.535076 | orchestrator | changed: [testbed-node-4] 2026-01-17 01:16:12.535080 | orchestrator | changed: [testbed-node-5] 2026-01-17 01:16:12.535084 | orchestrator | 2026-01-17 01:16:12.535087 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-01-17 01:16:12.535091 | orchestrator | Saturday 17 January 2026 01:12:59 +0000 (0:00:02.225) 0:05:23.799 ****** 2026-01-17 01:16:12.535099 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-17 01:16:12.535108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-17 01:16:12.535113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.535117 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:16:12.535121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-17 01:16:12.535126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-17 01:16:12.535132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-17 01:16:12.535141 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-17 01:16:12.535145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.535149 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:16:12.535153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.535157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-17 01:16:12.535164 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:16:12.535168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.535172 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.535176 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-17 01:16:12.535184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.535191 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.535195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-17 01:16:12.535199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-17 01:16:12.535203 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.535207 | orchestrator | 2026-01-17 01:16:12.535210 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-01-17 01:16:12.535214 | orchestrator | Saturday 17 January 2026 01:13:01 +0000 (0:00:01.621) 0:05:25.421 ****** 2026-01-17 01:16:12.535218 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-01-17 01:16:12.535222 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-01-17 01:16:12.535226 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:16:12.535232 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-01-17 01:16:12.535236 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-01-17 01:16:12.535240 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:16:12.535243 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-01-17 01:16:12.535247 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-01-17 01:16:12.535251 | orchestrator | skipping: [testbed-node-5] 2026-01-17 01:16:12.535255 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-01-17 01:16:12.535259 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-01-17 01:16:12.535262 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.535266 | orchestrator | skipping: 
[testbed-node-1] => (item=nova-compute)  2026-01-17 01:16:12.535270 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-01-17 01:16:12.535274 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.535280 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-01-17 01:16:12.535301 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-01-17 01:16:12.535306 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.535313 | orchestrator | 2026-01-17 01:16:12.535318 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-01-17 01:16:12.535324 | orchestrator | Saturday 17 January 2026 01:13:02 +0000 (0:00:00.925) 0:05:26.347 ****** 2026-01-17 01:16:12.535330 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-17 01:16:12.535345 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-17 01:16:12.535352 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-17 01:16:12.535363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-17 01:16:12.535368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-17 01:16:12.535372 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-17 01:16:12.535376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 
'timeout': '30'}}}) 2026-01-17 01:16:12.535382 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-17 01:16:12.535389 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-17 01:16:12.535393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.535400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.535404 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.535411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.535417 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.535430 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:12.535438 | orchestrator | 2026-01-17 01:16:12.535444 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-17 01:16:12.535455 | orchestrator | Saturday 17 January 2026 01:13:05 +0000 (0:00:03.200) 0:05:29.547 ****** 2026-01-17 01:16:12.535461 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:16:12.535466 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:16:12.535470 | orchestrator | skipping: 
[testbed-node-5] 2026-01-17 01:16:12.535474 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.535478 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.535482 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.535485 | orchestrator | 2026-01-17 01:16:12.535489 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-17 01:16:12.535493 | orchestrator | Saturday 17 January 2026 01:13:06 +0000 (0:00:00.838) 0:05:30.385 ****** 2026-01-17 01:16:12.535496 | orchestrator | 2026-01-17 01:16:12.535500 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-17 01:16:12.535504 | orchestrator | Saturday 17 January 2026 01:13:06 +0000 (0:00:00.136) 0:05:30.522 ****** 2026-01-17 01:16:12.535508 | orchestrator | 2026-01-17 01:16:12.535511 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-17 01:16:12.535515 | orchestrator | Saturday 17 January 2026 01:13:06 +0000 (0:00:00.135) 0:05:30.657 ****** 2026-01-17 01:16:12.535519 | orchestrator | 2026-01-17 01:16:12.535522 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-17 01:16:12.535526 | orchestrator | Saturday 17 January 2026 01:13:06 +0000 (0:00:00.140) 0:05:30.798 ****** 2026-01-17 01:16:12.535530 | orchestrator | 2026-01-17 01:16:12.535534 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-17 01:16:12.535537 | orchestrator | Saturday 17 January 2026 01:13:06 +0000 (0:00:00.140) 0:05:30.938 ****** 2026-01-17 01:16:12.535541 | orchestrator | 2026-01-17 01:16:12.535545 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-17 01:16:12.535548 | orchestrator | Saturday 17 January 2026 01:13:07 +0000 (0:00:00.132) 0:05:31.071 ****** 2026-01-17 01:16:12.535552 | orchestrator | 
2026-01-17 01:16:12.535556 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-01-17 01:16:12.535560 | orchestrator | Saturday 17 January 2026 01:13:07 +0000 (0:00:00.356) 0:05:31.428 ****** 2026-01-17 01:16:12.535563 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:16:12.535567 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:16:12.535571 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:16:12.535575 | orchestrator | 2026-01-17 01:16:12.535578 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-01-17 01:16:12.535582 | orchestrator | Saturday 17 January 2026 01:13:19 +0000 (0:00:12.472) 0:05:43.901 ****** 2026-01-17 01:16:12.535586 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:16:12.535589 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:16:12.535593 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:16:12.535597 | orchestrator | 2026-01-17 01:16:12.535601 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-01-17 01:16:12.535604 | orchestrator | Saturday 17 January 2026 01:13:32 +0000 (0:00:12.806) 0:05:56.707 ****** 2026-01-17 01:16:12.535608 | orchestrator | changed: [testbed-node-5] 2026-01-17 01:16:12.535612 | orchestrator | changed: [testbed-node-3] 2026-01-17 01:16:12.535616 | orchestrator | changed: [testbed-node-4] 2026-01-17 01:16:12.535619 | orchestrator | 2026-01-17 01:16:12.535623 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-01-17 01:16:12.535627 | orchestrator | Saturday 17 January 2026 01:13:54 +0000 (0:00:21.927) 0:06:18.634 ****** 2026-01-17 01:16:12.535630 | orchestrator | changed: [testbed-node-5] 2026-01-17 01:16:12.535634 | orchestrator | changed: [testbed-node-3] 2026-01-17 01:16:12.535638 | orchestrator | changed: [testbed-node-4] 2026-01-17 01:16:12.535642 | orchestrator | 2026-01-17 
01:16:12.535645 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-01-17 01:16:12.535649 | orchestrator | Saturday 17 January 2026 01:14:22 +0000 (0:00:27.969) 0:06:46.604 ****** 2026-01-17 01:16:12.535655 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-01-17 01:16:12.535659 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2026-01-17 01:16:12.535663 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2026-01-17 01:16:12.535667 | orchestrator | changed: [testbed-node-4] 2026-01-17 01:16:12.535671 | orchestrator | changed: [testbed-node-3] 2026-01-17 01:16:12.535674 | orchestrator | changed: [testbed-node-5] 2026-01-17 01:16:12.535678 | orchestrator | 2026-01-17 01:16:12.535682 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-01-17 01:16:12.535686 | orchestrator | Saturday 17 January 2026 01:14:28 +0000 (0:00:06.292) 0:06:52.897 ****** 2026-01-17 01:16:12.535689 | orchestrator | changed: [testbed-node-3] 2026-01-17 01:16:12.535693 | orchestrator | changed: [testbed-node-4] 2026-01-17 01:16:12.535697 | orchestrator | changed: [testbed-node-5] 2026-01-17 01:16:12.535700 | orchestrator | 2026-01-17 01:16:12.535706 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-01-17 01:16:12.535710 | orchestrator | Saturday 17 January 2026 01:14:29 +0000 (0:00:00.734) 0:06:53.631 ****** 2026-01-17 01:16:12.535714 | orchestrator | changed: [testbed-node-3] 2026-01-17 01:16:12.535718 | orchestrator | changed: [testbed-node-4] 2026-01-17 01:16:12.535721 | orchestrator | changed: [testbed-node-5] 2026-01-17 01:16:12.535725 | orchestrator | 2026-01-17 01:16:12.535731 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update 
service versions] *** 2026-01-17 01:16:12.535735 | orchestrator | Saturday 17 January 2026 01:14:55 +0000 (0:00:25.424) 0:07:19.056 ****** 2026-01-17 01:16:12.535739 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:16:12.535743 | orchestrator | 2026-01-17 01:16:12.535746 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-01-17 01:16:12.535750 | orchestrator | Saturday 17 January 2026 01:14:55 +0000 (0:00:00.124) 0:07:19.180 ****** 2026-01-17 01:16:12.535754 | orchestrator | skipping: [testbed-node-3] 2026-01-17 01:16:12.535758 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:12.535761 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:12.535775 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:12.535779 | orchestrator | skipping: [testbed-node-4] 2026-01-17 01:16:12.535782 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-01-17 01:16:12.535786 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-01-17 01:16:12.535790 | orchestrator |
2026-01-17 01:16:12.535794 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-01-17 01:16:12.535798 | orchestrator | Saturday 17 January 2026 01:15:18 +0000 (0:00:23.612) 0:07:42.792 ******
2026-01-17 01:16:12.535801 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:16:12.535805 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:16:12.535809 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:16:12.535813 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:16:12.535816 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:16:12.535820 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:16:12.535824 | orchestrator |
2026-01-17 01:16:12.535827 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-01-17 01:16:12.535831 | orchestrator | Saturday 17 January 2026 01:15:28 +0000 (0:00:09.201) 0:07:51.993 ******
2026-01-17 01:16:12.535835 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:16:12.535839 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:16:12.535842 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:16:12.535846 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:16:12.535850 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:16:12.535854 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5
2026-01-17 01:16:12.535857 | orchestrator |
2026-01-17 01:16:12.535861 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-01-17 01:16:12.535868 | orchestrator | Saturday 17 January 2026 01:15:32 +0000 (0:00:04.207) 0:07:56.201 ******
2026-01-17 01:16:12.535871 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-01-17 01:16:12.535875 | orchestrator |
2026-01-17 01:16:12.535879 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-01-17 01:16:12.535883 | orchestrator | Saturday 17 January 2026 01:15:46 +0000 (0:00:14.233) 0:08:10.435 ******
2026-01-17 01:16:12.535886 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-01-17 01:16:12.535890 | orchestrator |
2026-01-17 01:16:12.535894 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-01-17 01:16:12.535898 | orchestrator | Saturday 17 January 2026 01:15:47 +0000 (0:00:01.337) 0:08:11.772 ******
2026-01-17 01:16:12.535901 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:16:12.535905 | orchestrator |
2026-01-17 01:16:12.535909 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-01-17 01:16:12.535913 | orchestrator | Saturday 17 January 2026 01:15:49 +0000 (0:00:01.392) 0:08:13.165 ******
2026-01-17 01:16:12.535916 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-01-17 01:16:12.535920 | orchestrator |
2026-01-17 01:16:12.535924 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-01-17 01:16:12.535927 | orchestrator | Saturday 17 January 2026 01:16:02 +0000 (0:00:13.090) 0:08:26.256 ******
2026-01-17 01:16:12.535931 | orchestrator | ok: [testbed-node-3]
2026-01-17 01:16:12.535935 | orchestrator | ok: [testbed-node-4]
2026-01-17 01:16:12.535939 | orchestrator | ok: [testbed-node-5]
2026-01-17 01:16:12.535943 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:16:12.535946 | orchestrator | ok: [testbed-node-1]
2026-01-17 01:16:12.535950 | orchestrator | ok: [testbed-node-2]
2026-01-17 01:16:12.535954 | orchestrator |
2026-01-17 01:16:12.535957 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-01-17 01:16:12.535961 | orchestrator |
2026-01-17 01:16:12.535965 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-01-17 01:16:12.535969 | orchestrator | Saturday 17 January 2026 01:16:04 +0000 (0:00:01.779) 0:08:28.035 ******
2026-01-17 01:16:12.535973 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:16:12.535976 | orchestrator | changed: [testbed-node-1]
2026-01-17 01:16:12.535980 | orchestrator | changed: [testbed-node-2]
2026-01-17 01:16:12.535984 | orchestrator |
2026-01-17 01:16:12.535989 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-01-17 01:16:12.535996 | orchestrator |
2026-01-17 01:16:12.536002 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-01-17 01:16:12.536009 | orchestrator | Saturday 17 January 2026 01:16:05 +0000 (0:00:01.110) 0:08:29.146 ******
2026-01-17 01:16:12.536015 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:16:12.536021 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:16:12.536027 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:16:12.536031 | orchestrator |
2026-01-17 01:16:12.536035 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-01-17 01:16:12.536038 | orchestrator |
2026-01-17 01:16:12.536042 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-01-17 01:16:12.536046 | orchestrator | Saturday 17 January 2026 01:16:05 +0000 (0:00:00.586) 0:08:29.733 ******
2026-01-17 01:16:12.536050 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-01-17 01:16:12.536057 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-01-17 01:16:12.536061 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-01-17 01:16:12.536065 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-01-17 01:16:12.536071 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-01-17 01:16:12.536075 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-01-17 01:16:12.536079 | orchestrator | skipping: [testbed-node-3]
2026-01-17 01:16:12.536086 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-01-17 01:16:12.536090 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-01-17 01:16:12.536093 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-01-17 01:16:12.536097 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-01-17 01:16:12.536101 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-01-17 01:16:12.536104 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-01-17 01:16:12.536108 | orchestrator | skipping: [testbed-node-4]
2026-01-17 01:16:12.536112 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-01-17 01:16:12.536116 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-01-17 01:16:12.536119 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-01-17 01:16:12.536123 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-01-17 01:16:12.536127 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-01-17 01:16:12.536130 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-01-17 01:16:12.536134 | orchestrator | skipping: [testbed-node-5]
2026-01-17 01:16:12.536138 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-01-17 01:16:12.536142 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-01-17 01:16:12.536145 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-01-17 01:16:12.536149 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-01-17 01:16:12.536153 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-01-17 01:16:12.536156 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-01-17 01:16:12.536160 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:16:12.536164 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-01-17 01:16:12.536168 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-01-17 01:16:12.536171 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-01-17 01:16:12.536175 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-01-17 01:16:12.536179 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-01-17 01:16:12.536182 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-01-17 01:16:12.536186 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:16:12.536190 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-01-17 01:16:12.536193 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-01-17 01:16:12.536197 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-01-17 01:16:12.536201 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-01-17 01:16:12.536204 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-01-17 01:16:12.536208 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-01-17 01:16:12.536212 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:16:12.536216 | orchestrator |
2026-01-17 01:16:12.536219 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-01-17 01:16:12.536223 | orchestrator |
2026-01-17 01:16:12.536227 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-01-17 01:16:12.536231 | orchestrator | Saturday 17 January 2026 01:16:07 +0000 (0:00:01.428) 0:08:31.161 ******
2026-01-17 01:16:12.536234 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-01-17 01:16:12.536238 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-01-17 01:16:12.536242 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:16:12.536245 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-01-17 01:16:12.536249 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-01-17 01:16:12.536253 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:16:12.536257 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-01-17 01:16:12.536264 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-01-17 01:16:12.536267 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:16:12.536271 | orchestrator |
2026-01-17 01:16:12.536275 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-01-17 01:16:12.536279 | orchestrator |
2026-01-17 01:16:12.536388 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-01-17 01:16:12.536402 | orchestrator | Saturday 17 January 2026 01:16:07 +0000 (0:00:00.755) 0:08:31.916 ******
2026-01-17 01:16:12.536413 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:16:12.536421 | orchestrator |
2026-01-17 01:16:12.536425 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-01-17 01:16:12.536428 | orchestrator |
2026-01-17 01:16:12.536433 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-01-17 01:16:12.536436 | orchestrator | Saturday 17 January 2026 01:16:08 +0000 (0:00:00.720) 0:08:32.637 ******
2026-01-17 01:16:12.536440 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:16:12.536444 | orchestrator | skipping: [testbed-node-1]
2026-01-17 01:16:12.536447 | orchestrator | skipping: [testbed-node-2]
2026-01-17 01:16:12.536451 | orchestrator |
2026-01-17 01:16:12.536455 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 01:16:12.536463 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 01:16:12.536467 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-01-17 01:16:12.536476 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-01-17 01:16:12.536480 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-01-17 01:16:12.536484 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-01-17 01:16:12.536488 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-01-17 01:16:12.536492 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-01-17 01:16:12.536495 | orchestrator |
2026-01-17 01:16:12.536499 | orchestrator |
2026-01-17 01:16:12.536503 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 01:16:12.536507 | orchestrator | Saturday 17 January 2026 01:16:09 +0000 (0:00:00.476) 0:08:33.114 ******
2026-01-17 01:16:12.536511 | orchestrator | ===============================================================================
2026-01-17 01:16:12.536514 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 32.06s
2026-01-17 01:16:12.536518 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 27.97s
2026-01-17 01:16:12.536522 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 25.42s
2026-01-17 01:16:12.536525 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.61s
2026-01-17 01:16:12.536529 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.93s
2026-01-17 01:16:12.536533 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.40s
2026-01-17 01:16:12.536537 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.44s
2026-01-17 01:16:12.536543 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 17.71s
2026-01-17 01:16:12.536549 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 15.46s
2026-01-17 01:16:12.536565 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.23s
2026-01-17 01:16:12.536569 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.68s
2026-01-17 01:16:12.536573 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.22s
2026-01-17 01:16:12.536577 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.09s
2026-01-17 01:16:12.536581 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 12.81s
2026-01-17 01:16:12.536584 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.47s
2026-01-17 01:16:12.536588 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.22s
2026-01-17 01:16:12.536592 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.20s
2026-01-17 01:16:12.536596 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.35s
2026-01-17 01:16:12.536599 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.44s
2026-01-17 01:16:12.536603 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.08s
2026-01-17 01:16:12.536607 | orchestrator | 2026-01-17 01:16:12 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:16:15.578148 | orchestrator | 2026-01-17 01:16:15 | INFO  | Task ded2d698-a8bd-4275-a3c9-8f83c90a5e3a is in state STARTED
2026-01-17 01:16:15.578198 | orchestrator | 2026-01-17 01:16:15 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:16:18.627502 | orchestrator | 2026-01-17 01:16:18 | INFO  | Task ded2d698-a8bd-4275-a3c9-8f83c90a5e3a is in state STARTED
2026-01-17 01:16:18.627558 | orchestrator | 2026-01-17 01:16:18 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:16:21.669537 | orchestrator | 2026-01-17 01:16:21 | INFO  | Task ded2d698-a8bd-4275-a3c9-8f83c90a5e3a is in state STARTED
2026-01-17 01:16:21.669580 | orchestrator | 2026-01-17 01:16:21 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:16:24.708707 | orchestrator | 2026-01-17 01:16:24 | INFO  | Task ded2d698-a8bd-4275-a3c9-8f83c90a5e3a is in state STARTED
2026-01-17 01:16:24.708752 | orchestrator | 2026-01-17 01:16:24 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:16:27.752708 | orchestrator | 2026-01-17 01:16:27 | INFO  | Task ded2d698-a8bd-4275-a3c9-8f83c90a5e3a is in state STARTED
2026-01-17 01:16:27.752775 | orchestrator | 2026-01-17 01:16:27 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:16:30.803419 | orchestrator | 2026-01-17 01:16:30 | INFO  | Task ded2d698-a8bd-4275-a3c9-8f83c90a5e3a is in state STARTED
2026-01-17 01:16:30.803480 | orchestrator | 2026-01-17 01:16:30 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:16:33.856561 | orchestrator | 2026-01-17 01:16:33 | INFO  | Task ded2d698-a8bd-4275-a3c9-8f83c90a5e3a is in state STARTED
2026-01-17 01:16:33.856638 | orchestrator | 2026-01-17 01:16:33 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:16:36.903835 | orchestrator | 2026-01-17 01:16:36 | INFO  | Task ded2d698-a8bd-4275-a3c9-8f83c90a5e3a is in state STARTED
2026-01-17 01:16:36.903925 | orchestrator | 2026-01-17 01:16:36 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:16:39.948987 | orchestrator | 2026-01-17 01:16:39 | INFO  | Task ded2d698-a8bd-4275-a3c9-8f83c90a5e3a is in state STARTED
2026-01-17 01:16:39.949080 | orchestrator | 2026-01-17 01:16:39 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:16:43.007611 | orchestrator | 2026-01-17 01:16:43 | INFO  | Task ded2d698-a8bd-4275-a3c9-8f83c90a5e3a is in state STARTED
2026-01-17 01:16:43.007704 | orchestrator | 2026-01-17 01:16:43 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:16:46.053609 | orchestrator | 2026-01-17 01:16:46 | INFO  | Task ded2d698-a8bd-4275-a3c9-8f83c90a5e3a is in state STARTED
2026-01-17 01:16:46.053674 | orchestrator | 2026-01-17 01:16:46 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:16:49.096463 | orchestrator | 2026-01-17 01:16:49 | INFO  | Task ded2d698-a8bd-4275-a3c9-8f83c90a5e3a is in state STARTED
2026-01-17 01:16:49.096530 | orchestrator | 2026-01-17 01:16:49 | INFO  | Wait 1 second(s) until the next check
2026-01-17 01:16:52.145971 | orchestrator | 2026-01-17 01:16:52 | INFO  | Task ded2d698-a8bd-4275-a3c9-8f83c90a5e3a is in state SUCCESS
2026-01-17 01:16:52.147149 | orchestrator |
2026-01-17 01:16:52.147246 | orchestrator |
2026-01-17 01:16:52.147256 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-17 01:16:52.147264 | orchestrator |
2026-01-17 01:16:52.147271 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-17 01:16:52.147278 | orchestrator | Saturday 17 January 2026 01:11:48 +0000 (0:00:00.302) 0:00:00.302 ******
2026-01-17 01:16:52.147284 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:16:52.147291 | orchestrator | ok: [testbed-node-1]
2026-01-17 01:16:52.147298 | orchestrator | ok: [testbed-node-2]
2026-01-17 01:16:52.147304 | orchestrator |
2026-01-17 01:16:52.147311 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-17 01:16:52.147317 | orchestrator | Saturday 17 January 2026 01:11:49 +0000 (0:00:00.363) 0:00:00.665 ******
2026-01-17 01:16:52.147324 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-01-17 01:16:52.147331 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-01-17 01:16:52.147337 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-01-17 01:16:52.147344 | orchestrator |
2026-01-17 01:16:52.147350 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-01-17 01:16:52.147357 | orchestrator |
2026-01-17 01:16:52.147366 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-17 01:16:52.147372 | orchestrator | Saturday 17 January 2026 01:11:49 +0000 (0:00:00.547) 0:00:01.213 ******
2026-01-17 01:16:52.147379 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 01:16:52.147386 | orchestrator |
2026-01-17 01:16:52.147392 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-01-17 01:16:52.147399 | orchestrator | Saturday 17 January 2026 01:11:50 +0000 (0:00:00.656) 0:00:01.869 ******
2026-01-17 01:16:52.147472 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-01-17 01:16:52.147482 | orchestrator |
2026-01-17 01:16:52.147489 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-01-17 01:16:52.147496 | orchestrator | Saturday 17 January 2026 01:11:53 +0000 (0:00:03.319) 0:00:05.189 ******
2026-01-17 01:16:52.147503 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-01-17 01:16:52.147511 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-01-17 01:16:52.147517 | orchestrator |
2026-01-17 01:16:52.147524 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-01-17 01:16:52.147530 | orchestrator | Saturday 17 January 2026 01:12:01 +0000 (0:00:07.380) 0:00:12.570 ******
2026-01-17 01:16:52.147537 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-17 01:16:52.147544 | orchestrator |
2026-01-17 01:16:52.147551 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-01-17 01:16:52.147557 | orchestrator | Saturday 17 January 2026 01:12:04 +0000 (0:00:03.028) 0:00:15.598 ******
2026-01-17 01:16:52.147563 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-17 01:16:52.147571 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-01-17 01:16:52.147578 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-01-17 01:16:52.147585 | orchestrator |
2026-01-17 01:16:52.147777 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-01-17 01:16:52.147866 | orchestrator | Saturday 17 January 2026 01:12:12 +0000 (0:00:08.721) 0:00:24.320 ******
2026-01-17 01:16:52.147874 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-17 01:16:52.147881 | orchestrator |
2026-01-17 01:16:52.147887 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-01-17 01:16:52.147900 | orchestrator | Saturday 17 January 2026 01:12:16 +0000 (0:00:03.759) 0:00:28.080 ******
2026-01-17 01:16:52.147906 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-01-17 01:16:52.147911 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-01-17 01:16:52.147917 | orchestrator |
2026-01-17 01:16:52.147923 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-01-17 01:16:52.147929 | orchestrator | Saturday 17 January 2026 01:12:24 +0000 (0:00:08.145) 0:00:36.225 ******
2026-01-17 01:16:52.147935 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-01-17 01:16:52.147941 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-01-17 01:16:52.147947 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-01-17 01:16:52.147953 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-01-17 01:16:52.147958 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-01-17 01:16:52.147964 | orchestrator |
2026-01-17 01:16:52.147970 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-17 01:16:52.147975 | orchestrator | Saturday 17 January 2026 01:12:41 +0000 (0:00:16.799) 0:00:53.025 ******
2026-01-17 01:16:52.147981 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 01:16:52.147988 | orchestrator |
2026-01-17 01:16:52.147994 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-01-17 01:16:52.148000 | orchestrator | Saturday 17 January 2026 01:12:42 +0000 (0:00:00.572) 0:00:53.598 ******
2026-01-17 01:16:52.148006 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:16:52.148012 | orchestrator |
2026-01-17 01:16:52.148018 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-01-17 01:16:52.148024 | orchestrator | Saturday 17 January 2026 01:12:47 +0000 (0:00:05.327) 0:00:58.926 ******
2026-01-17 01:16:52.148030 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:16:52.148036 | orchestrator |
2026-01-17 01:16:52.148043 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-01-17 01:16:52.148058 | orchestrator | Saturday 17 January 2026 01:12:53 +0000 (0:00:05.604) 0:01:04.530 ******
2026-01-17 01:16:52.148065 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:16:52.148071 | orchestrator |
2026-01-17 01:16:52.148078 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-01-17 01:16:52.148084 | orchestrator | Saturday 17 January 2026 01:12:56 +0000 (0:00:03.530) 0:01:08.060 ******
2026-01-17 01:16:52.148090 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-01-17 01:16:52.148096 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-01-17 01:16:52.148103 | orchestrator |
2026-01-17 01:16:52.148109 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-01-17 01:16:52.148116 | orchestrator | Saturday 17 January 2026 01:13:07 +0000 (0:00:10.776) 0:01:18.837 ******
2026-01-17 01:16:52.148123 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-01-17 01:16:52.148129 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-01-17 01:16:52.148136 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-01-17 01:16:52.148143 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-01-17 01:16:52.148156 | orchestrator |
2026-01-17 01:16:52.148163 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-01-17 01:16:52.148169 | orchestrator | Saturday 17 January 2026 01:13:24 +0000 (0:00:16.553) 0:01:35.390 ******
2026-01-17 01:16:52.148175 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:16:52.148180 | orchestrator |
2026-01-17 01:16:52.148187 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-01-17 01:16:52.148194 | orchestrator | Saturday 17 January 2026 01:13:28 +0000 (0:00:04.741) 0:01:40.131 ******
2026-01-17 01:16:52.148200 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:16:52.148350 | orchestrator |
2026-01-17 01:16:52.148358 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-01-17 01:16:52.148365 | orchestrator | Saturday 17 January 2026 01:13:34 +0000 (0:00:05.562) 0:01:45.694 ******
2026-01-17 01:16:52.148372 | orchestrator | skipping: [testbed-node-0]
2026-01-17 01:16:52.148378 | orchestrator |
2026-01-17 01:16:52.148384 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-01-17 01:16:52.148390 | orchestrator | Saturday 17 January 2026 01:13:34 +0000 (0:00:00.222) 0:01:45.916 ******
2026-01-17 01:16:52.148397 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:16:52.148403 | orchestrator |
2026-01-17 01:16:52.148409 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-17 01:16:52.148426 | orchestrator | Saturday 17 January 2026 01:13:39 +0000 (0:00:05.036) 0:01:50.953 ******
2026-01-17 01:16:52.148433 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 01:16:52.148440 | orchestrator |
2026-01-17 01:16:52.148447 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-01-17 01:16:52.148454 | orchestrator | Saturday 17 January 2026 01:13:40 +0000 (0:00:01.069) 0:01:52.022 ******
2026-01-17 01:16:52.148459 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:16:52.148466 | orchestrator | changed: [testbed-node-2]
2026-01-17 01:16:52.148473 | orchestrator | changed: [testbed-node-1]
2026-01-17 01:16:52.148480 | orchestrator |
2026-01-17 01:16:52.148487 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-01-17 01:16:52.148498 | orchestrator | Saturday 17 January 2026 01:13:45 +0000 (0:00:05.273) 0:01:57.296 ******
2026-01-17 01:16:52.148505 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:16:52.148511 | orchestrator | changed: [testbed-node-2]
2026-01-17 01:16:52.148517 | orchestrator | changed: [testbed-node-1]
2026-01-17 01:16:52.148524 | orchestrator |
2026-01-17 01:16:52.148530 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-01-17 01:16:52.148537 | orchestrator | Saturday 17 January 2026 01:13:50 +0000 (0:00:04.321) 0:02:01.617 ******
2026-01-17 01:16:52.148543 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:16:52.148549 | orchestrator | changed: [testbed-node-1]
2026-01-17 01:16:52.148556 | orchestrator | changed: [testbed-node-2]
2026-01-17 01:16:52.148562 | orchestrator |
2026-01-17 01:16:52.148569 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-01-17 01:16:52.148575 | orchestrator | Saturday 17 January 2026 01:13:51 +0000 (0:00:01.028) 0:02:02.646 ******
2026-01-17 01:16:52.148581 | orchestrator | ok: [testbed-node-1]
2026-01-17 01:16:52.148588 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:16:52.148594 | orchestrator | ok: [testbed-node-2]
2026-01-17 01:16:52.148601 | orchestrator |
2026-01-17 01:16:52.148607 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-01-17 01:16:52.148614 | orchestrator | Saturday 17 January 2026 01:13:53 +0000 (0:00:02.581) 0:02:05.228 ******
2026-01-17 01:16:52.148620 | orchestrator | changed: [testbed-node-1]
2026-01-17 01:16:52.148626 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:16:52.148633 | orchestrator | changed: [testbed-node-2]
2026-01-17 01:16:52.148640 | orchestrator |
2026-01-17 01:16:52.148646 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-01-17 01:16:52.148659 | orchestrator | Saturday 17 January 2026 01:13:55 +0000 (0:00:01.392) 0:02:06.620 ******
2026-01-17 01:16:52.148665 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:16:52.148672 | orchestrator | changed: [testbed-node-1]
2026-01-17 01:16:52.148678 | orchestrator | changed: [testbed-node-2]
2026-01-17 01:16:52.148685 | orchestrator |
2026-01-17 01:16:52.148691 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-01-17 01:16:52.148698 | orchestrator | Saturday 17 January 2026 01:13:56 +0000 (0:00:01.164) 0:02:07.785 ******
2026-01-17 01:16:52.148704 | orchestrator | changed: [testbed-node-1]
2026-01-17 01:16:52.148711 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:16:52.148718 | orchestrator | changed: [testbed-node-2]
2026-01-17 01:16:52.148724 | orchestrator |
2026-01-17 01:16:52.148757 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-01-17 01:16:52.148762 | orchestrator | Saturday 17 January 2026 01:13:58 +0000 (0:00:01.961) 0:02:09.747 ******
2026-01-17 01:16:52.148766 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:16:52.148770 | orchestrator | changed: [testbed-node-1]
2026-01-17 01:16:52.148773 | orchestrator | changed: [testbed-node-2]
2026-01-17 01:16:52.148777 | orchestrator |
2026-01-17 01:16:52.148781 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-01-17 01:16:52.148784 | orchestrator | Saturday 17 January 2026 01:14:00 +0000 (0:00:01.957) 0:02:11.704 ******
2026-01-17 01:16:52.148788 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:16:52.148795 | orchestrator | ok: [testbed-node-1]
2026-01-17 01:16:52.148801 | orchestrator | ok: [testbed-node-2]
2026-01-17 01:16:52.148808 | orchestrator |
2026-01-17 01:16:52.148814 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-01-17 01:16:52.148821 | orchestrator | Saturday 17 January 2026 01:14:01 +0000 (0:00:00.789) 0:02:12.494 ******
2026-01-17 01:16:52.148828 | orchestrator | ok: [testbed-node-1]
2026-01-17 01:16:52.148834 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:16:52.148840 | orchestrator | ok: [testbed-node-2]
2026-01-17 01:16:52.148846 | orchestrator |
2026-01-17 01:16:52.148853 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-17 01:16:52.148859 | orchestrator | Saturday 17 January 2026 01:14:04 +0000 (0:00:03.423) 0:02:15.917 ******
2026-01-17 01:16:52.148866 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-17 01:16:52.148873 | orchestrator |
2026-01-17 01:16:52.148879 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-01-17 01:16:52.148885 | orchestrator | Saturday 17 January 2026 01:14:05 +0000 (0:00:00.804) 0:02:16.721 ******
2026-01-17 01:16:52.148891 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:16:52.148898 | orchestrator |
2026-01-17 01:16:52.148905 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-01-17 01:16:52.148911 | orchestrator | Saturday 17 January 2026 01:14:09 +0000 (0:00:03.797) 0:02:20.519 ******
2026-01-17 01:16:52.148918 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:16:52.148924 | orchestrator |
2026-01-17 01:16:52.148930 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-01-17 01:16:52.148937 | orchestrator | Saturday 17 January 2026 01:14:12 +0000 (0:00:02.997) 0:02:23.517 ******
2026-01-17 01:16:52.148943 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-01-17 01:16:52.148950 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-01-17 01:16:52.148956 | orchestrator |
2026-01-17 01:16:52.148962 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-01-17 01:16:52.148969 | orchestrator | Saturday 17 January 2026 01:14:18 +0000 (0:00:06.682) 0:02:30.200 ******
2026-01-17 01:16:52.148975 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:16:52.148982 | orchestrator |
2026-01-17 01:16:52.148988 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-01-17 01:16:52.148996 | orchestrator | Saturday 17 January 2026 01:14:22 +0000 (0:00:03.272) 0:02:33.473 ******
2026-01-17 01:16:52.149007 | orchestrator | ok: [testbed-node-0]
2026-01-17 01:16:52.149015 | orchestrator | ok: [testbed-node-1]
2026-01-17 01:16:52.149021 | orchestrator | ok: [testbed-node-2]
2026-01-17 01:16:52.149028 | orchestrator |
2026-01-17 01:16:52.149035 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-01-17 01:16:52.149042 | orchestrator | Saturday 17 January 2026 01:14:22 +0000 (0:00:00.386) 0:02:33.859 ******
2026-01-17 01:16:52.149055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876',
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:52.149085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:52.149093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 
'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:52.149101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-17 01:16:52.149109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-17 01:16:52.149121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-17 01:16:52.149136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.149144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.149167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.149175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.149182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.149194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.149203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:52.149210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:52.149217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:52.149224 | orchestrator | 2026-01-17 01:16:52.149230 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-01-17 01:16:52.149237 | orchestrator | Saturday 17 January 2026 01:14:25 +0000 (0:00:02.507) 0:02:36.367 ****** 2026-01-17 01:16:52.149244 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:52.149250 | orchestrator | 2026-01-17 01:16:52.149274 | 
orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-01-17 01:16:52.149281 | orchestrator | Saturday 17 January 2026 01:14:25 +0000 (0:00:00.151) 0:02:36.519 ****** 2026-01-17 01:16:52.149288 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:52.149294 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:52.149301 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:52.149308 | orchestrator | 2026-01-17 01:16:52.149314 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-01-17 01:16:52.149321 | orchestrator | Saturday 17 January 2026 01:14:25 +0000 (0:00:00.547) 0:02:37.066 ****** 2026-01-17 01:16:52.149328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-17 01:16:52.149340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-17 01:16:52.149348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-17 01:16:52.149357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-17 01:16:52.149365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:16:52.149372 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:52.149394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-17 01:16:52.149401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-17 01:16:52.149412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-17 01:16:52.149463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-17 01:16:52.149472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:16:52.149479 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:52.149486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-17 01:16:52.149510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-17 01:16:52.149518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-17 01:16:52.149529 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-17 01:16:52.149537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:16:52.149543 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:52.149550 | orchestrator | 2026-01-17 01:16:52.149556 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-17 01:16:52.149563 | orchestrator | Saturday 17 January 2026 01:14:26 +0000 (0:00:00.711) 0:02:37.778 ****** 2026-01-17 01:16:52.149569 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-17 01:16:52.149575 | orchestrator | 2026-01-17 01:16:52.149582 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-01-17 01:16:52.149590 | orchestrator | Saturday 17 January 2026 01:14:27 +0000 
(0:00:00.600) 0:02:38.378 ****** 2026-01-17 01:16:52.149597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:52.149620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 2026-01-17 01:16:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-17 01:16:52.149629 | orchestrator | 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:52.149640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:52.149647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-17 01:16:52.149654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-17 01:16:52.149663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-17 01:16:52.149669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.149690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-17 
01:16:52.149697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.149709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.149716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.149723 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.149732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:52.149739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:52.149748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:52.149759 | orchestrator | 2026-01-17 01:16:52.149765 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-01-17 01:16:52.149771 | orchestrator | Saturday 17 January 2026 01:14:31 +0000 (0:00:04.982) 0:02:43.361 ****** 2026-01-17 01:16:52.149778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-17 01:16:52.149785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-17 01:16:52.149791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-17 01:16:52.149801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-17 01:16:52.149807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:16:52.149814 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:52.149827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-17 01:16:52.149840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-17 01:16:52.149846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-17 01:16:52.149853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-17 01:16:52.149862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:16:52.149869 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:52.149876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-17 01:16:52.149890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-17 01:16:52.149897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-17 01:16:52.149903 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-17 01:16:52.149910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:16:52.149916 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:52.149923 | orchestrator | 2026-01-17 01:16:52.149929 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-01-17 01:16:52.149935 | orchestrator | Saturday 17 January 2026 01:14:33 +0000 (0:00:01.375) 0:02:44.736 ****** 2026-01-17 01:16:52.149945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-17 01:16:52.149952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-17 01:16:52.149966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-17 01:16:52.149973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-17 01:16:52.149980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:16:52.149986 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:52.149993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-17 01:16:52.150002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-17 01:16:52.150009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-17 01:16:52.150055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-17 01:16:52.150067 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:16:52.150074 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:52.150081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-17 01:16:52.150088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-17 01:16:52.150098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-17 01:16:52.150106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-17 01:16:52.150116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-17 01:16:52.150123 | orchestrator | skipping: [testbed-node-1] 2026-01-17 01:16:52.150129 | orchestrator | 2026-01-17 01:16:52.150136 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-01-17 01:16:52.150141 | orchestrator | Saturday 17 January 2026 01:14:34 +0000 (0:00:01.303) 0:02:46.040 ****** 2026-01-17 01:16:52.150152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:52.150160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:52.150170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:52.150181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-17 01:16:52.150187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-17 01:16:52.150198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-17 01:16:52.150205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150277 | orchestrator | 2026-01-17 01:16:52.150283 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-01-17 01:16:52.150290 | orchestrator | Saturday 17 January 2026 01:14:40 +0000 (0:00:05.695) 0:02:51.736 ****** 2026-01-17 01:16:52.150296 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-17 01:16:52.150304 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-17 01:16:52.150310 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-17 01:16:52.150317 | orchestrator | 2026-01-17 01:16:52.150323 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-01-17 01:16:52.150334 | orchestrator | Saturday 17 January 2026 01:14:42 +0000 (0:00:01.812) 0:02:53.548 ****** 2026-01-17 01:16:52.150344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:52.150351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:52.150361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:52.150368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-17 01:16:52.150375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-17 01:16:52.150381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-17 01:16:52.150397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150501 | orchestrator | 2026-01-17 01:16:52.150508 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-01-17 01:16:52.150514 | orchestrator | Saturday 17 January 2026 01:15:02 +0000 (0:00:20.262) 0:03:13.811 ****** 2026-01-17 01:16:52.150521 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:16:52.150527 | orchestrator | changed: [testbed-node-1] 2026-01-17 01:16:52.150533 | orchestrator | changed: [testbed-node-2] 2026-01-17 01:16:52.150539 | orchestrator | 2026-01-17 01:16:52.150546 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-01-17 01:16:52.150552 | orchestrator | Saturday 17 January 2026 01:15:04 +0000 (0:00:01.571) 0:03:15.383 ****** 2026-01-17 01:16:52.150562 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-17 01:16:52.150569 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-17 01:16:52.150575 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-17 01:16:52.150581 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-17 01:16:52.150588 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-17 01:16:52.150594 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-17 01:16:52.150601 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-17 01:16:52.150607 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-17 01:16:52.150614 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-17 01:16:52.150620 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-17 01:16:52.150626 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-17 01:16:52.150633 | orchestrator | changed: 
[testbed-node-2] => (item=server_ca.key.pem) 2026-01-17 01:16:52.150639 | orchestrator | 2026-01-17 01:16:52.150645 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-01-17 01:16:52.150652 | orchestrator | Saturday 17 January 2026 01:15:09 +0000 (0:00:05.309) 0:03:20.692 ****** 2026-01-17 01:16:52.150663 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-17 01:16:52.150669 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-17 01:16:52.150675 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-17 01:16:52.150682 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-17 01:16:52.150688 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-17 01:16:52.150694 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-17 01:16:52.150701 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-17 01:16:52.150707 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-17 01:16:52.150713 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-17 01:16:52.150719 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-17 01:16:52.150725 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-17 01:16:52.150732 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-17 01:16:52.150738 | orchestrator | 2026-01-17 01:16:52.150745 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-01-17 01:16:52.150751 | orchestrator | Saturday 17 January 2026 01:15:15 +0000 (0:00:06.270) 0:03:26.963 ****** 2026-01-17 01:16:52.150757 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-17 01:16:52.150764 | orchestrator | changed: [testbed-node-1] => 
(item=client.cert-and-key.pem) 2026-01-17 01:16:52.150770 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-17 01:16:52.150776 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-17 01:16:52.150783 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-17 01:16:52.150789 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-17 01:16:52.150795 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-17 01:16:52.150801 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-17 01:16:52.150807 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-17 01:16:52.150813 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-17 01:16:52.150823 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-17 01:16:52.150830 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-17 01:16:52.150836 | orchestrator | 2026-01-17 01:16:52.150842 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-01-17 01:16:52.150849 | orchestrator | Saturday 17 January 2026 01:15:21 +0000 (0:00:05.919) 0:03:32.883 ****** 2026-01-17 01:16:52.150855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:52.150865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:52.150878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-17 01:16:52.150885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-17 01:16:52.150892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-17 01:16:52.150901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-17 01:16:52.150908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-17 01:16:52.150986 | orchestrator | 2026-01-17 01:16:52.150993 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-17 01:16:52.150999 | orchestrator | Saturday 17 January 2026 01:15:26 +0000 (0:00:05.243) 0:03:38.127 ****** 2026-01-17 01:16:52.151006 | orchestrator | skipping: [testbed-node-0] 2026-01-17 01:16:52.151012 | orchestrator | skipping: 
[testbed-node-1] 2026-01-17 01:16:52.151018 | orchestrator | skipping: [testbed-node-2] 2026-01-17 01:16:52.151025 | orchestrator | 2026-01-17 01:16:52.151030 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-01-17 01:16:52.151036 | orchestrator | Saturday 17 January 2026 01:15:27 +0000 (0:00:00.291) 0:03:38.418 ****** 2026-01-17 01:16:52.151043 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:16:52.151049 | orchestrator | 2026-01-17 01:16:52.151055 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-01-17 01:16:52.151062 | orchestrator | Saturday 17 January 2026 01:15:29 +0000 (0:00:02.306) 0:03:40.725 ****** 2026-01-17 01:16:52.151068 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:16:52.151075 | orchestrator | 2026-01-17 01:16:52.151081 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-01-17 01:16:52.151087 | orchestrator | Saturday 17 January 2026 01:15:31 +0000 (0:00:02.567) 0:03:43.293 ****** 2026-01-17 01:16:52.151094 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:16:52.151100 | orchestrator | 2026-01-17 01:16:52.151106 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-01-17 01:16:52.151113 | orchestrator | Saturday 17 January 2026 01:15:34 +0000 (0:00:02.446) 0:03:45.739 ****** 2026-01-17 01:16:52.151119 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:16:52.151125 | orchestrator | 2026-01-17 01:16:52.151132 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-01-17 01:16:52.151138 | orchestrator | Saturday 17 January 2026 01:15:37 +0000 (0:00:03.035) 0:03:48.775 ****** 2026-01-17 01:16:52.151144 | orchestrator | changed: [testbed-node-0] 2026-01-17 01:16:52.151150 | orchestrator | 2026-01-17 01:16:52.151157 | orchestrator | TASK [octavia : Flush handlers] 
************************************************
2026-01-17 01:16:52.151164 | orchestrator | Saturday 17 January 2026 01:15:56 +0000 (0:00:18.780) 0:04:07.556 ******
2026-01-17 01:16:52.151170 | orchestrator |
2026-01-17 01:16:52.151176 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-01-17 01:16:52.151182 | orchestrator | Saturday 17 January 2026 01:15:56 +0000 (0:00:00.087) 0:04:07.643 ******
2026-01-17 01:16:52.151188 | orchestrator |
2026-01-17 01:16:52.151195 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-01-17 01:16:52.151201 | orchestrator | Saturday 17 January 2026 01:15:56 +0000 (0:00:00.073) 0:04:07.716 ******
2026-01-17 01:16:52.151208 | orchestrator |
2026-01-17 01:16:52.151214 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2026-01-17 01:16:52.151220 | orchestrator | Saturday 17 January 2026 01:15:56 +0000 (0:00:00.072) 0:04:07.789 ******
2026-01-17 01:16:52.151226 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:16:52.151232 | orchestrator | changed: [testbed-node-2]
2026-01-17 01:16:52.151239 | orchestrator | changed: [testbed-node-1]
2026-01-17 01:16:52.151246 | orchestrator |
2026-01-17 01:16:52.151252 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2026-01-17 01:16:52.151264 | orchestrator | Saturday 17 January 2026 01:16:10 +0000 (0:00:14.380) 0:04:22.169 ******
2026-01-17 01:16:52.151270 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:16:52.151279 | orchestrator | changed: [testbed-node-2]
2026-01-17 01:16:52.151285 | orchestrator | changed: [testbed-node-1]
2026-01-17 01:16:52.151291 | orchestrator |
2026-01-17 01:16:52.151298 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2026-01-17 01:16:52.151304 | orchestrator | Saturday 17 January 2026 01:16:21 +0000 (0:00:10.803) 0:04:32.973 ******
2026-01-17 01:16:52.151310 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:16:52.151316 | orchestrator | changed: [testbed-node-2]
2026-01-17 01:16:52.151323 | orchestrator | changed: [testbed-node-1]
2026-01-17 01:16:52.151329 | orchestrator |
2026-01-17 01:16:52.151336 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2026-01-17 01:16:52.151342 | orchestrator | Saturday 17 January 2026 01:16:31 +0000 (0:00:10.100) 0:04:43.074 ******
2026-01-17 01:16:52.151348 | orchestrator | changed: [testbed-node-1]
2026-01-17 01:16:52.151354 | orchestrator | changed: [testbed-node-2]
2026-01-17 01:16:52.151361 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:16:52.151367 | orchestrator |
2026-01-17 01:16:52.151373 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-01-17 01:16:52.151379 | orchestrator | Saturday 17 January 2026 01:16:40 +0000 (0:00:08.559) 0:04:51.633 ******
2026-01-17 01:16:52.151385 | orchestrator | changed: [testbed-node-1]
2026-01-17 01:16:52.151392 | orchestrator | changed: [testbed-node-0]
2026-01-17 01:16:52.151398 | orchestrator | changed: [testbed-node-2]
2026-01-17 01:16:52.151404 | orchestrator |
2026-01-17 01:16:52.151410 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 01:16:52.151428 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-17 01:16:52.151435 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-17 01:16:52.151441 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-17 01:16:52.151448 | orchestrator |
2026-01-17 01:16:52.151454 | orchestrator |
2026-01-17 01:16:52.151460 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 01:16:52.151470 | orchestrator | Saturday 17 January 2026 01:16:50 +0000 (0:00:10.514) 0:05:02.148 ******
2026-01-17 01:16:52.151477 | orchestrator | ===============================================================================
2026-01-17 01:16:52.151483 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 20.26s
2026-01-17 01:16:52.151489 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 18.78s
2026-01-17 01:16:52.151496 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.80s
2026-01-17 01:16:52.151502 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.55s
2026-01-17 01:16:52.151508 | orchestrator | octavia : Restart octavia-api container -------------------------------- 14.38s
2026-01-17 01:16:52.151515 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 10.80s
2026-01-17 01:16:52.151521 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.78s
2026-01-17 01:16:52.151527 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.51s
2026-01-17 01:16:52.151533 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.10s
2026-01-17 01:16:52.151539 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.72s
2026-01-17 01:16:52.151545 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.56s
2026-01-17 01:16:52.151552 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.15s
2026-01-17 01:16:52.151563 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.38s
2026-01-17 01:16:52.151569 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.68s
2026-01-17 01:16:52.151576 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 6.27s
2026-01-17 01:16:52.151582 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.92s
2026-01-17 01:16:52.151589 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.70s
2026-01-17 01:16:52.151595 | orchestrator | octavia : Create nova keypair for amphora ------------------------------- 5.60s
2026-01-17 01:16:52.151601 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.56s
2026-01-17 01:16:52.151607 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.33s
2026-01-17 01:16:55.196821 | orchestrator | 2026-01-17 01:16:55 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-17 01:16:58.236391 | orchestrator | 2026-01-17 01:16:58 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-17 01:17:01.279253 | orchestrator | 2026-01-17 01:17:01 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-17 01:17:04.318977 | orchestrator | 2026-01-17 01:17:04 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-17 01:17:07.356812 | orchestrator | 2026-01-17 01:17:07 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-17 01:17:10.398258 | orchestrator | 2026-01-17 01:17:10 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-17 01:17:13.442679 | orchestrator | 2026-01-17 01:17:13 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-17 01:17:16.486808 | orchestrator | 2026-01-17 01:17:16 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-17 01:17:19.538253 | orchestrator | 2026-01-17 01:17:19 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-17 01:17:22.580889 | orchestrator | 2026-01-17 01:17:22 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-17 01:17:25.627047 | orchestrator | 2026-01-17 01:17:25 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-17 01:17:28.670319 | orchestrator | 2026-01-17 01:17:28 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-17 01:17:31.715928 | orchestrator | 2026-01-17 01:17:31 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-17 01:17:34.759784 | orchestrator | 2026-01-17 01:17:34 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-17 01:17:37.799767 | orchestrator | 2026-01-17 01:17:37 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-17 01:17:40.846721 | orchestrator | 2026-01-17 01:17:40 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-17 01:17:43.887804 | orchestrator | 2026-01-17 01:17:43 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-17 01:17:46.926901 | orchestrator | 2026-01-17 01:17:46 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-17 01:17:49.971956 | orchestrator | 2026-01-17 01:17:49 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-17 01:17:53.017723 | orchestrator |
2026-01-17 01:17:53.435151 | orchestrator |
2026-01-17 01:17:53.442181 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Jan 17 01:17:53 UTC 2026
2026-01-17 01:17:53.442400 | orchestrator |
2026-01-17 01:17:53.889125 | orchestrator | ok: Runtime: 0:35:57.121960
2026-01-17 01:17:54.145252 |
2026-01-17 01:17:54.145399 | TASK [Bootstrap services]
2026-01-17 01:17:54.893990 | orchestrator |
2026-01-17 01:17:54.894183 | orchestrator | # BOOTSTRAP
2026-01-17 01:17:54.894196 | orchestrator |
2026-01-17 01:17:54.894204 | orchestrator | + set -e
2026-01-17 01:17:54.894212 | orchestrator | + echo
2026-01-17 01:17:54.894221 | orchestrator | + echo '# BOOTSTRAP'
2026-01-17 01:17:54.894232 | orchestrator | + echo
2026-01-17 01:17:54.894261 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-01-17 01:17:54.903739 | orchestrator | + set -e
2026-01-17 01:17:54.903814 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-01-17 01:18:00.118082 | orchestrator | 2026-01-17 01:18:00 | INFO  | It takes a moment until task 2e5e3463-023f-44ed-a9d1-a271294661c5 (flavor-manager) has been started and output is visible here.
2026-01-17 01:18:08.214153 | orchestrator | 2026-01-17 01:18:03 | INFO  | Flavor SCS-1L-1 created
2026-01-17 01:18:08.214274 | orchestrator | 2026-01-17 01:18:03 | INFO  | Flavor SCS-1L-1-5 created
2026-01-17 01:18:08.214300 | orchestrator | 2026-01-17 01:18:04 | INFO  | Flavor SCS-1V-2 created
2026-01-17 01:18:08.214319 | orchestrator | 2026-01-17 01:18:04 | INFO  | Flavor SCS-1V-2-5 created
2026-01-17 01:18:08.214336 | orchestrator | 2026-01-17 01:18:04 | INFO  | Flavor SCS-1V-4 created
2026-01-17 01:18:08.214354 | orchestrator | 2026-01-17 01:18:04 | INFO  | Flavor SCS-1V-4-10 created
2026-01-17 01:18:08.214371 | orchestrator | 2026-01-17 01:18:05 | INFO  | Flavor SCS-1V-8 created
2026-01-17 01:18:08.214389 | orchestrator | 2026-01-17 01:18:05 | INFO  | Flavor SCS-1V-8-20 created
2026-01-17 01:18:08.214421 | orchestrator | 2026-01-17 01:18:05 | INFO  | Flavor SCS-2V-4 created
2026-01-17 01:18:08.214439 | orchestrator | 2026-01-17 01:18:05 | INFO  | Flavor SCS-2V-4-10 created
2026-01-17 01:18:08.214457 | orchestrator | 2026-01-17 01:18:05 | INFO  | Flavor SCS-2V-8 created
2026-01-17 01:18:08.214474 | orchestrator | 2026-01-17 01:18:05 | INFO  | Flavor SCS-2V-8-20 created
2026-01-17 01:18:08.214491 | orchestrator | 2026-01-17 01:18:06 | INFO  | Flavor SCS-2V-16 created
2026-01-17 01:18:08.214508 | orchestrator | 2026-01-17 01:18:06 | INFO  | Flavor SCS-2V-16-50 created
2026-01-17 01:18:08.214525 | orchestrator | 2026-01-17 01:18:06 | INFO  | Flavor SCS-4V-8 created
2026-01-17 01:18:08.214543 | orchestrator | 2026-01-17 01:18:06 | INFO  | Flavor SCS-4V-8-20 created
2026-01-17 01:18:08.214560 | orchestrator | 2026-01-17 01:18:06 | INFO  | Flavor SCS-4V-16 created
2026-01-17 01:18:08.214577 | orchestrator | 2026-01-17 01:18:06 | INFO  | Flavor SCS-4V-16-50 created
2026-01-17 01:18:08.214594 | orchestrator | 2026-01-17 01:18:06 | INFO  | Flavor SCS-4V-32 created
2026-01-17 01:18:08.214611 | orchestrator | 2026-01-17 01:18:06 | INFO  | Flavor SCS-4V-32-100 created
2026-01-17 01:18:08.214628 | orchestrator | 2026-01-17 01:18:07 | INFO  | Flavor SCS-8V-16 created
2026-01-17 01:18:08.214727 | orchestrator | 2026-01-17 01:18:07 | INFO  | Flavor SCS-8V-16-50 created
2026-01-17 01:18:08.214746 | orchestrator | 2026-01-17 01:18:07 | INFO  | Flavor SCS-8V-32 created
2026-01-17 01:18:08.214763 | orchestrator | 2026-01-17 01:18:07 | INFO  | Flavor SCS-8V-32-100 created
2026-01-17 01:18:08.214780 | orchestrator | 2026-01-17 01:18:07 | INFO  | Flavor SCS-16V-32 created
2026-01-17 01:18:08.214797 | orchestrator | 2026-01-17 01:18:07 | INFO  | Flavor SCS-16V-32-100 created
2026-01-17 01:18:08.214815 | orchestrator | 2026-01-17 01:18:07 | INFO  | Flavor SCS-2V-4-20s created
2026-01-17 01:18:08.214831 | orchestrator | 2026-01-17 01:18:07 | INFO  | Flavor SCS-4V-8-50s created
2026-01-17 01:18:08.214848 | orchestrator | 2026-01-17 01:18:07 | INFO  | Flavor SCS-8V-32-100s created
2026-01-17 01:18:10.781123 | orchestrator | 2026-01-17 01:18:10 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-01-17 01:18:20.879370 | orchestrator | 2026-01-17 01:18:20 | INFO  | Task 7903041f-be22-4959-b6bd-80ef408d8ad8 (bootstrap-basic) was prepared for execution.
2026-01-17 01:18:20.879454 | orchestrator | 2026-01-17 01:18:20 | INFO  | It takes a moment until task 7903041f-be22-4959-b6bd-80ef408d8ad8 (bootstrap-basic) has been started and output is visible here.
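The flavor names created above follow the SCS naming scheme, which encodes the resources directly in the name. As a minimal sketch (assuming the convention `SCS-<cpus><class>-<ram_gib>[-<disk_gb>[<type>]]`, with `V` for vCPU, `L` for low-performance vCPU, and a trailing `s` marking SSD disk; `parse_scs_flavor` is a hypothetical helper, not part of openstack-flavor-manager):

```python
import re

# Assumed SCS flavor-name layout: SCS-<cpus><class>-<ram_gib>[-<disk_gb>[<type>]]
PATTERN = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cpu_class>[LVCT])-(?P<ram>\d+)"
    r"(?:-(?P<disk>\d+)(?P<disk_type>[shnp])?)?$"
)

def parse_scs_flavor(name: str) -> dict:
    """Split an SCS flavor name like 'SCS-4V-8-50s' into its resource parts."""
    m = PATTERN.match(name)
    if m is None:
        raise ValueError(f"not an SCS flavor name: {name}")
    return {
        "cpus": int(m["cpus"]),
        "cpu_class": m["cpu_class"],
        "ram_gib": int(m["ram"]),
        "disk_gb": int(m["disk"]) if m["disk"] else None,
        "disk_type": m["disk_type"],
    }

# 'SCS-2V-4-20s' → 2 vCPUs, 4 GiB RAM, 20 GB SSD-backed disk
print(parse_scs_flavor("SCS-2V-4-20s"))
```

Diskless flavors such as `SCS-1L-1` simply leave the optional disk group empty.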
2026-01-17 01:19:07.461871 | orchestrator |
2026-01-17 01:19:07.461958 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-01-17 01:19:07.461970 | orchestrator |
2026-01-17 01:19:07.461978 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-17 01:19:07.461985 | orchestrator | Saturday 17 January 2026 01:18:25 +0000 (0:00:00.093) 0:00:00.093 ******
2026-01-17 01:19:07.461993 | orchestrator | ok: [localhost]
2026-01-17 01:19:07.462001 | orchestrator |
2026-01-17 01:19:07.462007 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-01-17 01:19:07.462047 | orchestrator | Saturday 17 January 2026 01:18:27 +0000 (0:00:01.941) 0:00:02.035 ******
2026-01-17 01:19:07.462054 | orchestrator | ok: [localhost]
2026-01-17 01:19:07.462061 | orchestrator |
2026-01-17 01:19:07.462068 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-01-17 01:19:07.462075 | orchestrator | Saturday 17 January 2026 01:18:36 +0000 (0:00:09.041) 0:00:11.077 ******
2026-01-17 01:19:07.462081 | orchestrator | changed: [localhost]
2026-01-17 01:19:07.462089 | orchestrator |
2026-01-17 01:19:07.462095 | orchestrator | TASK [Create public network] ***************************************************
2026-01-17 01:19:07.462099 | orchestrator | Saturday 17 January 2026 01:18:44 +0000 (0:00:08.090) 0:00:19.168 ******
2026-01-17 01:19:07.462103 | orchestrator | changed: [localhost]
2026-01-17 01:19:07.462107 | orchestrator |
2026-01-17 01:19:07.462111 | orchestrator | TASK [Set public network to default] *******************************************
2026-01-17 01:19:07.462115 | orchestrator | Saturday 17 January 2026 01:18:49 +0000 (0:00:05.291) 0:00:24.459 ******
2026-01-17 01:19:07.462122 | orchestrator | changed: [localhost]
2026-01-17 01:19:07.462126 | orchestrator |
2026-01-17 01:19:07.462130 | orchestrator | TASK [Create public subnet] ****************************************************
2026-01-17 01:19:07.462135 | orchestrator | Saturday 17 January 2026 01:18:56 +0000 (0:00:06.242) 0:00:30.702 ******
2026-01-17 01:19:07.462138 | orchestrator | changed: [localhost]
2026-01-17 01:19:07.462142 | orchestrator |
2026-01-17 01:19:07.462146 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-01-17 01:19:07.462150 | orchestrator | Saturday 17 January 2026 01:19:00 +0000 (0:00:04.364) 0:00:35.067 ******
2026-01-17 01:19:07.462154 | orchestrator | changed: [localhost]
2026-01-17 01:19:07.462157 | orchestrator |
2026-01-17 01:19:07.462161 | orchestrator | TASK [Create manager role] *****************************************************
2026-01-17 01:19:07.462172 | orchestrator | Saturday 17 January 2026 01:19:04 +0000 (0:00:03.448) 0:00:38.516 ******
2026-01-17 01:19:07.462176 | orchestrator | ok: [localhost]
2026-01-17 01:19:07.462180 | orchestrator |
2026-01-17 01:19:07.462184 | orchestrator | PLAY RECAP *********************************************************************
2026-01-17 01:19:07.462188 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-17 01:19:07.462193 | orchestrator |
2026-01-17 01:19:07.462196 | orchestrator |
2026-01-17 01:19:07.462200 | orchestrator | TASKS RECAP ********************************************************************
2026-01-17 01:19:07.462204 | orchestrator | Saturday 17 January 2026 01:19:07 +0000 (0:00:03.272) 0:00:41.788 ******
2026-01-17 01:19:07.462208 | orchestrator | ===============================================================================
2026-01-17 01:19:07.462211 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.04s
2026-01-17 01:19:07.462215 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.09s
2026-01-17 01:19:07.462219 | orchestrator | Set public network to default ------------------------------------------- 6.24s
2026-01-17 01:19:07.462223 | orchestrator | Create public network --------------------------------------------------- 5.29s
2026-01-17 01:19:07.462244 | orchestrator | Create public subnet ---------------------------------------------------- 4.36s
2026-01-17 01:19:07.462248 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.45s
2026-01-17 01:19:07.462252 | orchestrator | Create manager role ----------------------------------------------------- 3.27s
2026-01-17 01:19:07.462256 | orchestrator | Gathering Facts --------------------------------------------------------- 1.94s
2026-01-17 01:19:09.618675 | orchestrator | 2026-01-17 01:19:09 | INFO  | It takes a moment until task 08ce0f6d-23f4-4af4-93aa-b8bae8bea51d (image-manager) has been started and output is visible here.
2026-01-17 01:19:49.621830 | orchestrator | 2026-01-17 01:19:12 | INFO  | Processing image 'Cirros 0.6.2'
2026-01-17 01:19:49.621975 | orchestrator | 2026-01-17 01:19:12 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-01-17 01:19:49.621990 | orchestrator | 2026-01-17 01:19:12 | INFO  | Importing image Cirros 0.6.2
2026-01-17 01:19:49.621997 | orchestrator | 2026-01-17 01:19:12 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-01-17 01:19:49.622006 | orchestrator | 2026-01-17 01:19:14 | INFO  | Waiting for image to leave queued state...
2026-01-17 01:19:49.622066 | orchestrator | 2026-01-17 01:19:16 | INFO  | Waiting for import to complete...
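The "Waiting for image to leave queued state..." and "Waiting for import to complete..." lines above come from a poll-until-done loop. A minimal sketch of that pattern (the real openstack-image-manager logic is more involved; `refreshed_is_active` and the state sequence are hypothetical stand-ins for a Glance status refresh):

```python
import time

def wait_for(check, timeout: float = 60.0, interval: float = 1.0) -> bool:
    """Poll check() until it returns True or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Hypothetical usage: states an image might pass through during import.
states = iter(["queued", "importing", "importing", "active"])

def refreshed_is_active() -> bool:
    # Each call simulates re-fetching the image status from Glance.
    return next(states, "active") == "active"

print(wait_for(refreshed_is_active, timeout=5.0, interval=0.0))
```

Returning `False` on timeout (rather than raising) lets the caller decide whether a slow import is fatal.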
2026-01-17 01:19:49.622074 | orchestrator | 2026-01-17 01:19:26 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-01-17 01:19:49.622083 | orchestrator | 2026-01-17 01:19:27 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-01-17 01:19:49.622090 | orchestrator | 2026-01-17 01:19:27 | INFO  | Setting internal_version = 0.6.2
2026-01-17 01:19:49.622097 | orchestrator | 2026-01-17 01:19:27 | INFO  | Setting image_original_user = cirros
2026-01-17 01:19:49.622106 | orchestrator | 2026-01-17 01:19:27 | INFO  | Adding tag os:cirros
2026-01-17 01:19:49.622113 | orchestrator | 2026-01-17 01:19:27 | INFO  | Setting property architecture: x86_64
2026-01-17 01:19:49.622120 | orchestrator | 2026-01-17 01:19:27 | INFO  | Setting property hw_disk_bus: scsi
2026-01-17 01:19:49.622127 | orchestrator | 2026-01-17 01:19:27 | INFO  | Setting property hw_rng_model: virtio
2026-01-17 01:19:49.622134 | orchestrator | 2026-01-17 01:19:28 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-01-17 01:19:49.622140 | orchestrator | 2026-01-17 01:19:28 | INFO  | Setting property hw_watchdog_action: reset
2026-01-17 01:19:49.622147 | orchestrator | 2026-01-17 01:19:28 | INFO  | Setting property hypervisor_type: qemu
2026-01-17 01:19:49.622154 | orchestrator | 2026-01-17 01:19:28 | INFO  | Setting property os_distro: cirros
2026-01-17 01:19:49.622160 | orchestrator | 2026-01-17 01:19:28 | INFO  | Setting property os_purpose: minimal
2026-01-17 01:19:49.622166 | orchestrator | 2026-01-17 01:19:29 | INFO  | Setting property replace_frequency: never
2026-01-17 01:19:49.622173 | orchestrator | 2026-01-17 01:19:29 | INFO  | Setting property uuid_validity: none
2026-01-17 01:19:49.622179 | orchestrator | 2026-01-17 01:19:29 | INFO  | Setting property provided_until: none
2026-01-17 01:19:49.622185 | orchestrator | 2026-01-17 01:19:29 | INFO  | Setting property image_description: Cirros
2026-01-17 01:19:49.622192 | orchestrator | 2026-01-17 01:19:30 | INFO  | Setting property image_name: Cirros
2026-01-17 01:19:49.622198 | orchestrator | 2026-01-17 01:19:30 | INFO  | Setting property internal_version: 0.6.2
2026-01-17 01:19:49.622205 | orchestrator | 2026-01-17 01:19:30 | INFO  | Setting property image_original_user: cirros
2026-01-17 01:19:49.622233 | orchestrator | 2026-01-17 01:19:30 | INFO  | Setting property os_version: 0.6.2
2026-01-17 01:19:49.622248 | orchestrator | 2026-01-17 01:19:30 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-01-17 01:19:49.622256 | orchestrator | 2026-01-17 01:19:31 | INFO  | Setting property image_build_date: 2023-05-30
2026-01-17 01:19:49.622263 | orchestrator | 2026-01-17 01:19:31 | INFO  | Checking status of 'Cirros 0.6.2'
2026-01-17 01:19:49.622270 | orchestrator | 2026-01-17 01:19:31 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-01-17 01:19:49.622276 | orchestrator | 2026-01-17 01:19:31 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-01-17 01:19:49.622284 | orchestrator | 2026-01-17 01:19:31 | INFO  | Processing image 'Cirros 0.6.3'
2026-01-17 01:19:49.622293 | orchestrator | 2026-01-17 01:19:32 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-01-17 01:19:49.622301 | orchestrator | 2026-01-17 01:19:32 | INFO  | Importing image Cirros 0.6.3
2026-01-17 01:19:49.622307 | orchestrator | 2026-01-17 01:19:32 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-01-17 01:19:49.622313 | orchestrator | 2026-01-17 01:19:32 | INFO  | Waiting for image to leave queued state...
2026-01-17 01:19:49.622372 | orchestrator | 2026-01-17 01:19:34 | INFO  | Waiting for import to complete...
2026-01-17 01:19:49.622400 | orchestrator | 2026-01-17 01:19:44 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-01-17 01:19:49.622409 | orchestrator | 2026-01-17 01:19:45 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-01-17 01:19:49.622416 | orchestrator | 2026-01-17 01:19:45 | INFO  | Setting internal_version = 0.6.3
2026-01-17 01:19:49.622422 | orchestrator | 2026-01-17 01:19:45 | INFO  | Setting image_original_user = cirros
2026-01-17 01:19:49.622429 | orchestrator | 2026-01-17 01:19:45 | INFO  | Adding tag os:cirros
2026-01-17 01:19:49.622436 | orchestrator | 2026-01-17 01:19:45 | INFO  | Setting property architecture: x86_64
2026-01-17 01:19:49.622443 | orchestrator | 2026-01-17 01:19:45 | INFO  | Setting property hw_disk_bus: scsi
2026-01-17 01:19:49.622450 | orchestrator | 2026-01-17 01:19:45 | INFO  | Setting property hw_rng_model: virtio
2026-01-17 01:19:49.622456 | orchestrator | 2026-01-17 01:19:45 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-01-17 01:19:49.622463 | orchestrator | 2026-01-17 01:19:46 | INFO  | Setting property hw_watchdog_action: reset
2026-01-17 01:19:49.622470 | orchestrator | 2026-01-17 01:19:46 | INFO  | Setting property hypervisor_type: qemu
2026-01-17 01:19:49.622477 | orchestrator | 2026-01-17 01:19:46 | INFO  | Setting property os_distro: cirros
2026-01-17 01:19:49.622485 | orchestrator | 2026-01-17 01:19:46 | INFO  | Setting property os_purpose: minimal
2026-01-17 01:19:49.622492 | orchestrator | 2026-01-17 01:19:46 | INFO  | Setting property replace_frequency: never
2026-01-17 01:19:49.622499 | orchestrator | 2026-01-17 01:19:47 | INFO  | Setting property uuid_validity: none
2026-01-17 01:19:49.622506 | orchestrator | 2026-01-17 01:19:47 | INFO  | Setting property provided_until: none
2026-01-17 01:19:49.622513 | orchestrator | 2026-01-17 01:19:47 | INFO  | Setting property image_description: Cirros
2026-01-17 01:19:49.622519 | orchestrator | 2026-01-17 01:19:47 | INFO  | Setting property image_name: Cirros
2026-01-17 01:19:49.622526 | orchestrator | 2026-01-17 01:19:47 | INFO  | Setting property internal_version: 0.6.3
2026-01-17 01:19:49.622542 | orchestrator | 2026-01-17 01:19:48 | INFO  | Setting property image_original_user: cirros
2026-01-17 01:19:49.622549 | orchestrator | 2026-01-17 01:19:48 | INFO  | Setting property os_version: 0.6.3
2026-01-17 01:19:49.622557 | orchestrator | 2026-01-17 01:19:48 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-01-17 01:19:49.622564 | orchestrator | 2026-01-17 01:19:48 | INFO  | Setting property image_build_date: 2024-09-26
2026-01-17 01:19:49.622571 | orchestrator | 2026-01-17 01:19:48 | INFO  | Checking status of 'Cirros 0.6.3'
2026-01-17 01:19:49.622578 | orchestrator | 2026-01-17 01:19:48 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-01-17 01:19:49.622586 | orchestrator | 2026-01-17 01:19:48 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-01-17 01:19:49.955974 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-01-17 01:19:52.321942 | orchestrator | 2026-01-17 01:19:52 | INFO  | date: 2026-01-16
2026-01-17 01:19:52.322078 | orchestrator | 2026-01-17 01:19:52 | INFO  | image: octavia-amphora-haproxy-2024.2.20260116.qcow2
2026-01-17 01:19:52.322115 | orchestrator | 2026-01-17 01:19:52 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260116.qcow2
2026-01-17 01:19:52.322125 | orchestrator | 2026-01-17 01:19:52 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260116.qcow2.CHECKSUM
2026-01-17 01:20:52.423121 | orchestrator | 2026-01-17 01:20:52 | INFO  | checksum: localhost | ok: "/var/lib/zuul/builds/a542b7811fc84384a6deed4810765420/work/logs"
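The amphora image above is fetched together with a `.CHECKSUM` URL so the download can be verified before upload. As a minimal sketch (assuming a coreutils `sha256sum`-style line of the form `<hex digest>  <filename>`; the actual CHECKSUM layout served next to the image may differ, and `verify_checksum` is a hypothetical helper):

```python
import hashlib

def verify_checksum(data: bytes, checksum_line: str, filename: str) -> bool:
    """Check data against one sha256sum-style line: '<hex digest>  <filename>'."""
    digest, _, name = checksum_line.strip().partition("  ")
    if name != filename:
        return False  # line refers to a different file
    return hashlib.sha256(data).hexdigest() == digest

# Usage with a stand-in payload instead of the real qcow2 image:
payload = b"example image bytes"
line = hashlib.sha256(payload).hexdigest() + "  amphora.qcow2"
print(verify_checksum(payload, line, "amphora.qcow2"))
```

Comparing against the published digest catches both truncated downloads and a CHECKSUM file that belongs to a different build.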
2026-01-17 01:21:24.815685 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a542b7811fc84384a6deed4810765420/work/artifacts"
2026-01-17 01:21:25.099633 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a542b7811fc84384a6deed4810765420/work/docs"
2026-01-17 01:21:25.124215 |
2026-01-17 01:21:25.124403 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-01-17 01:21:26.068088 | orchestrator | changed: .d..t...... ./
2026-01-17 01:21:26.068437 | orchestrator | changed: All items complete
2026-01-17 01:21:26.068497 |
2026-01-17 01:21:26.752191 | orchestrator | changed: .d..t...... ./
2026-01-17 01:21:27.466447 | orchestrator | changed: .d..t...... ./
2026-01-17 01:21:27.498359 |
2026-01-17 01:21:27.498524 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-01-17 01:21:27.535322 | orchestrator | skipping: Conditional result was False
2026-01-17 01:21:27.537670 | orchestrator | skipping: Conditional result was False
2026-01-17 01:21:27.564475 |
2026-01-17 01:21:27.564647 | PLAY RECAP
2026-01-17 01:21:27.564779 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-01-17 01:21:27.564854 |
2026-01-17 01:21:27.699054 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-17 01:21:27.703771 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-17 01:21:28.499250 |
2026-01-17 01:21:28.499415 | PLAY [Base post]
2026-01-17 01:21:28.514012 |
2026-01-17 01:21:28.514150 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-01-17 01:21:30.016179 | orchestrator | changed
2026-01-17 01:21:30.034192 |
2026-01-17 01:21:30.034375 | PLAY RECAP
2026-01-17 01:21:30.034482 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-01-17 01:21:30.034591 |
2026-01-17 01:21:30.157601 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-17 01:21:30.160609 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-01-17 01:21:30.946075 |
2026-01-17 01:21:30.946252 | PLAY [Base post-logs]
2026-01-17 01:21:30.957492 |
2026-01-17 01:21:30.957631 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-01-17 01:21:31.454293 | localhost | changed
2026-01-17 01:21:31.470496 |
2026-01-17 01:21:31.470671 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-01-17 01:21:31.513434 | localhost | ok
2026-01-17 01:21:31.525877 |
2026-01-17 01:21:31.526089 | TASK [Set zuul-log-path fact]
2026-01-17 01:21:31.555219 | localhost | ok
2026-01-17 01:21:31.573676 |
2026-01-17 01:21:31.573867 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-17 01:21:31.601423 | localhost | ok
2026-01-17 01:21:31.607252 |
2026-01-17 01:21:31.607444 | TASK [upload-logs : Create log directories]
2026-01-17 01:21:32.114408 | localhost | changed
2026-01-17 01:21:32.121235 |
2026-01-17 01:21:32.121363 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-01-17 01:21:32.646312 | localhost -> localhost | ok: Runtime: 0:00:00.007786
2026-01-17 01:21:32.656295 |
2026-01-17 01:21:32.656484 | TASK [upload-logs : Upload logs to log server]
2026-01-17 01:21:33.255511 | localhost | Output suppressed because no_log was given
2026-01-17 01:21:33.260189 |
2026-01-17 01:21:33.260425 | LOOP [upload-logs : Compress console log and json output]
2026-01-17 01:21:33.311245 | localhost | skipping: Conditional result was False
2026-01-17 01:21:33.316794 | localhost | skipping: Conditional result was False
2026-01-17 01:21:33.321265 |
2026-01-17 01:21:33.321374 | LOOP [upload-logs : Upload compressed console log and json output]
2026-01-17 01:21:33.379521 | localhost | skipping: Conditional result was False
2026-01-17 01:21:33.379801 |
2026-01-17 01:21:33.385307 | localhost | skipping: Conditional result was False
2026-01-17 01:21:33.388469 |
2026-01-17 01:21:33.388575 | LOOP [upload-logs : Upload console log and json output]